Jul 6 23:51:40.902429 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 6 23:51:40.902459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:51:40.902473 kernel: BIOS-provided physical RAM map: Jul 6 23:51:40.902480 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 6 23:51:40.902486 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 6 23:51:40.902493 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 6 23:51:40.902501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jul 6 23:51:40.902508 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jul 6 23:51:40.902514 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 6 23:51:40.902525 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 6 23:51:40.902547 kernel: NX (Execute Disable) protection: active Jul 6 23:51:40.902554 kernel: APIC: Static calls initialized Jul 6 23:51:40.902565 kernel: SMBIOS 2.8 present. Jul 6 23:51:40.902573 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jul 6 23:51:40.902581 kernel: Hypervisor detected: KVM Jul 6 23:51:40.902593 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 6 23:51:40.902604 kernel: kvm-clock: using sched offset of 2709062859 cycles Jul 6 23:51:40.902612 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 6 23:51:40.902620 kernel: tsc: Detected 2494.138 MHz processor Jul 6 23:51:40.902629 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:51:40.902637 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:51:40.902645 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jul 6 23:51:40.902653 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 6 23:51:40.902661 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:51:40.902672 kernel: ACPI: Early table checksum verification disabled Jul 6 23:51:40.902679 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jul 6 23:51:40.902687 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902695 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902702 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902710 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jul 6 23:51:40.902718 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902725 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902733 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902744 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:51:40.902751 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Jul 6 23:51:40.902759 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jul 6 23:51:40.902766 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jul 6 23:51:40.902774 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jul 6 23:51:40.902781 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jul 6 23:51:40.902789 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jul 6 23:51:40.902804 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jul 6 23:51:40.902812 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 6 23:51:40.902821 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 6 23:51:40.902829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 6 23:51:40.902837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 6 23:51:40.902848 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jul 6 23:51:40.902856 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jul 6 23:51:40.902868 kernel: Zone ranges: Jul 6 23:51:40.902877 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:51:40.902885 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jul 6 23:51:40.902893 kernel: Normal empty Jul 6 23:51:40.902901 kernel: Movable zone start for each node Jul 6 23:51:40.902909 kernel: Early memory node ranges Jul 6 23:51:40.902917 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 6 23:51:40.902925 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jul 6 23:51:40.902933 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jul 6 23:51:40.902944 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:51:40.902952 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 6 23:51:40.902962 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jul 6 23:51:40.902970 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 6 23:51:40.902978 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 6 23:51:40.902986 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:51:40.902994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 6 23:51:40.903002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 6 23:51:40.903010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 6 23:51:40.903021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 6 23:51:40.903029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 6 23:51:40.903037 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:51:40.903045 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 6 23:51:40.903053 kernel: TSC deadline timer available Jul 6 23:51:40.903061 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 6 23:51:40.903069 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 6 23:51:40.903077 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jul 6 23:51:40.903088 kernel: Booting paravirtualized kernel on KVM Jul 6 23:51:40.903096 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:51:40.903108 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 6 23:51:40.903116 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jul 6 23:51:40.903124 kernel: 
pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jul 6 23:51:40.903132 kernel: pcpu-alloc: [0] 0 1 Jul 6 23:51:40.903140 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 6 23:51:40.903150 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:51:40.903158 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:51:40.903170 kernel: random: crng init done Jul 6 23:51:40.903178 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:51:40.903186 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 6 23:51:40.903194 kernel: Fallback order for Node 0: 0 Jul 6 23:51:40.903202 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jul 6 23:51:40.903210 kernel: Policy zone: DMA32 Jul 6 23:51:40.903218 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:51:40.903226 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125148K reserved, 0K cma-reserved) Jul 6 23:51:40.903235 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:51:40.903246 kernel: Kernel/User page tables isolation: enabled Jul 6 23:51:40.903254 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:51:40.903262 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:51:40.903270 kernel: Dynamic Preempt: voluntary Jul 6 23:51:40.903278 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:51:40.903287 kernel: rcu: RCU event tracing is enabled. Jul 6 23:51:40.903295 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:51:40.903304 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:51:40.903312 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:51:40.903324 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:51:40.903332 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:51:40.903341 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:51:40.903349 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 6 23:51:40.903357 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:51:40.903367 kernel: Console: colour VGA+ 80x25 Jul 6 23:51:40.903376 kernel: printk: console [tty0] enabled Jul 6 23:51:40.903384 kernel: printk: console [ttyS0] enabled Jul 6 23:51:40.903392 kernel: ACPI: Core revision 20230628 Jul 6 23:51:40.903400 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 6 23:51:40.903412 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:51:40.903420 kernel: x2apic enabled Jul 6 23:51:40.903428 kernel: APIC: Switched APIC routing to: physical x2apic Jul 6 23:51:40.903436 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 6 23:51:40.903444 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jul 6 23:51:40.903453 kernel: Calibrating delay loop (skipped) preset value.. 
4988.27 BogoMIPS (lpj=2494138) Jul 6 23:51:40.903461 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 6 23:51:40.903469 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 6 23:51:40.903490 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:51:40.903499 kernel: Spectre V2 : Mitigation: Retpolines Jul 6 23:51:40.903507 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 6 23:51:40.903520 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 6 23:51:40.903528 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:51:40.903567 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:51:40.903575 kernel: MDS: Mitigation: Clear CPU buffers Jul 6 23:51:40.903584 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:51:40.903593 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:51:40.903608 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:51:40.903617 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:51:40.903626 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:51:40.903634 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:51:40.903643 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 6 23:51:40.903652 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:51:40.903660 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:51:40.903669 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:51:40.903682 kernel: landlock: Up and running. Jul 6 23:51:40.903691 kernel: SELinux: Initializing. Jul 6 23:51:40.903699 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:51:40.903708 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:51:40.903717 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jul 6 23:51:40.903725 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:51:40.903734 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:51:40.903743 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:51:40.903752 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jul 6 23:51:40.903764 kernel: signal: max sigframe size: 1776 Jul 6 23:51:40.903773 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:51:40.903782 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:51:40.903790 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 6 23:51:40.903799 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:51:40.903808 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:51:40.903816 kernel: .... 
node #0, CPUs: #1 Jul 6 23:51:40.903825 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:51:40.903836 kernel: smpboot: Max logical packages: 1 Jul 6 23:51:40.903849 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jul 6 23:51:40.903858 kernel: devtmpfs: initialized Jul 6 23:51:40.903866 kernel: x86/mm: Memory block size: 128MB Jul 6 23:51:40.903875 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:51:40.903884 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:51:40.903892 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:51:40.903901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:51:40.903910 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:51:40.903919 kernel: audit: type=2000 audit(1751845900.686:1): state=initialized audit_enabled=0 res=1 Jul 6 23:51:40.903931 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:51:40.903939 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:51:40.903948 kernel: cpuidle: using governor menu Jul 6 23:51:40.903957 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:51:40.903966 kernel: dca service started, version 1.12.1 Jul 6 23:51:40.903974 kernel: PCI: Using configuration type 1 for base access Jul 6 23:51:40.903983 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 6 23:51:40.903992 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:51:40.904001 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:51:40.904013 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:51:40.904022 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:51:40.904030 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:51:40.904039 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:51:40.904048 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:51:40.904056 kernel: ACPI: Interpreter enabled Jul 6 23:51:40.904065 kernel: ACPI: PM: (supports S0 S5) Jul 6 23:51:40.904074 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:51:40.904083 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:51:40.904095 kernel: PCI: Using E820 reservations for host bridge windows Jul 6 23:51:40.904104 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 6 23:51:40.904112 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 6 23:51:40.904314 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:51:40.904423 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 6 23:51:40.904519 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 6 23:51:40.904531 kernel: acpiphp: Slot [3] registered Jul 6 23:51:40.904569 kernel: acpiphp: Slot [4] registered Jul 6 23:51:40.904578 kernel: acpiphp: Slot [5] registered Jul 6 23:51:40.904587 kernel: acpiphp: Slot [6] registered Jul 6 23:51:40.904596 kernel: acpiphp: Slot [7] registered Jul 6 23:51:40.904604 kernel: acpiphp: Slot [8] registered Jul 6 23:51:40.904613 kernel: acpiphp: Slot [9] registered Jul 6 23:51:40.904621 kernel: acpiphp: Slot [10] registered Jul 6 23:51:40.904630 kernel: acpiphp: Slot [11] registered Jul 6 23:51:40.904639 kernel: acpiphp: Slot [12] registered Jul 6 
23:51:40.904651 kernel: acpiphp: Slot [13] registered Jul 6 23:51:40.904660 kernel: acpiphp: Slot [14] registered Jul 6 23:51:40.904669 kernel: acpiphp: Slot [15] registered Jul 6 23:51:40.904677 kernel: acpiphp: Slot [16] registered Jul 6 23:51:40.904686 kernel: acpiphp: Slot [17] registered Jul 6 23:51:40.904695 kernel: acpiphp: Slot [18] registered Jul 6 23:51:40.904703 kernel: acpiphp: Slot [19] registered Jul 6 23:51:40.904712 kernel: acpiphp: Slot [20] registered Jul 6 23:51:40.904720 kernel: acpiphp: Slot [21] registered Jul 6 23:51:40.904729 kernel: acpiphp: Slot [22] registered Jul 6 23:51:40.904741 kernel: acpiphp: Slot [23] registered Jul 6 23:51:40.904749 kernel: acpiphp: Slot [24] registered Jul 6 23:51:40.904758 kernel: acpiphp: Slot [25] registered Jul 6 23:51:40.904767 kernel: acpiphp: Slot [26] registered Jul 6 23:51:40.904776 kernel: acpiphp: Slot [27] registered Jul 6 23:51:40.904785 kernel: acpiphp: Slot [28] registered Jul 6 23:51:40.904794 kernel: acpiphp: Slot [29] registered Jul 6 23:51:40.904802 kernel: acpiphp: Slot [30] registered Jul 6 23:51:40.904811 kernel: acpiphp: Slot [31] registered Jul 6 23:51:40.904824 kernel: PCI host bridge to bus 0000:00 Jul 6 23:51:40.904936 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:51:40.905027 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 6 23:51:40.905128 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:51:40.905226 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 6 23:51:40.905313 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 6 23:51:40.905400 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 6 23:51:40.905543 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 6 23:51:40.905677 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 6 23:51:40.905791 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 6 23:51:40.905890 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jul 6 23:51:40.905991 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 6 23:51:40.906092 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 6 23:51:40.906188 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 6 23:51:40.906290 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 6 23:51:40.906400 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jul 6 23:51:40.906527 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jul 6 23:51:40.906651 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 6 23:51:40.906747 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 6 23:51:40.906844 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 6 23:51:40.906976 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jul 6 23:51:40.907081 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jul 6 23:51:40.907180 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jul 6 23:51:40.907278 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jul 6 23:51:40.907373 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 6 23:51:40.907469 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:51:40.907584 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 6 23:51:40.907690 kernel: 
pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jul 6 23:51:40.907786 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jul 6 23:51:40.907882 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jul 6 23:51:40.907992 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 6 23:51:40.908092 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jul 6 23:51:40.908189 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jul 6 23:51:40.908290 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jul 6 23:51:40.908403 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jul 6 23:51:40.908502 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jul 6 23:51:40.908626 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jul 6 23:51:40.908724 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jul 6 23:51:40.908833 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jul 6 23:51:40.908932 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jul 6 23:51:40.909033 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jul 6 23:51:40.909150 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jul 6 23:51:40.909269 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jul 6 23:51:40.909369 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jul 6 23:51:40.909465 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jul 6 23:51:40.909589 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jul 6 23:51:40.909703 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jul 6 23:51:40.909808 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jul 6 23:51:40.909909 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jul 6 23:51:40.909921 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 6 23:51:40.909931 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 6 23:51:40.909941 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 6 23:51:40.909950 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 6 23:51:40.909959 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 6 23:51:40.909973 kernel: iommu: Default domain type: Translated Jul 6 23:51:40.909982 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:51:40.909992 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:51:40.910001 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:51:40.910010 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 6 23:51:40.910019 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jul 6 23:51:40.910116 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 6 23:51:40.910214 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 6 23:51:40.910309 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 6 23:51:40.910326 kernel: vgaarb: loaded Jul 6 23:51:40.910335 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 6 23:51:40.910344 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 6 23:51:40.910353 kernel: clocksource: Switched to clocksource kvm-clock Jul 6 23:51:40.910362 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:51:40.910371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:51:40.910380 kernel: pnp: PnP ACPI init Jul 6 23:51:40.910389 kernel: pnp: PnP ACPI: found 4 devices Jul 6 
23:51:40.910398 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:51:40.910411 kernel: NET: Registered PF_INET protocol family Jul 6 23:51:40.910420 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:51:40.910429 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 6 23:51:40.910439 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:51:40.910448 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:51:40.910457 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 6 23:51:40.910466 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 6 23:51:40.910475 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 6 23:51:40.910483 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 6 23:51:40.910496 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:51:40.910504 kernel: NET: Registered PF_XDP protocol family Jul 6 23:51:40.910608 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 6 23:51:40.910696 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 6 23:51:40.910782 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 6 23:51:40.910869 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 6 23:51:40.910981 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 6 23:51:40.911098 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 6 23:51:40.911209 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 6 23:51:40.911223 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 6 23:51:40.911325 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29239 usecs Jul 6 23:51:40.911337 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:51:40.911347 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 6 23:51:40.911356 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jul 6 23:51:40.911365 kernel: Initialise system trusted keyrings Jul 6 23:51:40.911374 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 6 23:51:40.911388 kernel: Key type asymmetric registered Jul 6 23:51:40.911397 kernel: Asymmetric key parser 'x509' registered Jul 6 23:51:40.911406 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:51:40.911415 kernel: io scheduler mq-deadline registered Jul 6 23:51:40.911424 kernel: io scheduler kyber registered Jul 6 23:51:40.911433 kernel: io scheduler bfq registered Jul 6 23:51:40.911443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:51:40.911452 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 6 23:51:40.911461 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 6 23:51:40.911470 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 6 23:51:40.911483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:51:40.911492 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:51:40.911501 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 6 23:51:40.911510 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:51:40.911519 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:51:40.911528 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jul 6 23:51:40.911678 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 6 23:51:40.911772 kernel: rtc_cmos 00:03: registered as rtc0 Jul 6 23:51:40.911903 kernel: rtc_cmos 00:03: setting system clock to 2025-07-06T23:51:40 UTC (1751845900) Jul 6 23:51:40.912001 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 6 23:51:40.912014 kernel: intel_pstate: CPU model not supported Jul 6 23:51:40.912023 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:51:40.912032 kernel: Segment Routing with IPv6 Jul 6 23:51:40.912041 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:51:40.912051 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:51:40.912060 kernel: Key type dns_resolver registered Jul 6 23:51:40.912074 kernel: IPI shorthand broadcast: enabled Jul 6 23:51:40.912083 kernel: sched_clock: Marking stable (864002610, 81543439)->(1036851819, -91305770) Jul 6 23:51:40.912092 kernel: registered taskstats version 1 Jul 6 23:51:40.912101 kernel: Loading compiled-in X.509 certificates Jul 6 23:51:40.912111 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:51:40.912120 kernel: Key type .fscrypt registered Jul 6 23:51:40.912129 kernel: Key type fscrypt-provisioning registered Jul 6 23:51:40.912138 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:51:40.912147 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:51:40.912159 kernel: ima: No architecture policies found Jul 6 23:51:40.912169 kernel: clk: Disabling unused clocks Jul 6 23:51:40.912178 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:51:40.912187 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:51:40.912197 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:51:40.912233 kernel: Run /init as init process Jul 6 23:51:40.912246 kernel: with arguments: Jul 6 23:51:40.912256 kernel: /init Jul 6 23:51:40.912266 kernel: with environment: Jul 6 23:51:40.912278 kernel: HOME=/ Jul 6 23:51:40.912287 kernel: TERM=linux Jul 6 23:51:40.912296 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:51:40.912309 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:51:40.912321 systemd[1]: Detected virtualization kvm. Jul 6 23:51:40.912331 systemd[1]: Detected architecture x86-64. Jul 6 23:51:40.912341 systemd[1]: Running in initrd. Jul 6 23:51:40.912350 systemd[1]: No hostname configured, using default hostname. Jul 6 23:51:40.912364 systemd[1]: Hostname set to . Jul 6 23:51:40.912375 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:51:40.912385 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:51:40.912394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:51:40.912404 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:51:40.912415 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:51:40.912424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 6 23:51:40.912438 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:51:40.912448 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:51:40.912460 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:51:40.912470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:51:40.912479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:51:40.912489 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:51:40.912499 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:51:40.912512 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:51:40.912522 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:51:40.912532 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:51:40.912569 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:51:40.912579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:51:40.912589 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:51:40.912602 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:51:40.912612 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:51:40.912621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:51:40.912632 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:51:40.912643 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:51:40.912653 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:51:40.912662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:51:40.912672 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:51:40.912686 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:51:40.912696 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:51:40.912706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:51:40.912716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:40.912726 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:51:40.912770 systemd-journald[183]: Collecting audit messages is disabled. Jul 6 23:51:40.912800 systemd-journald[183]: Journal started Jul 6 23:51:40.912822 systemd-journald[183]: Runtime Journal (/run/log/journal/1b899485577a48e5b6d3b83f7f44d690) is 4.9M, max 39.3M, 34.4M free. Jul 6 23:51:40.918554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:51:40.918625 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:51:40.919486 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:51:40.924878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:51:40.927647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:51:40.947919 systemd-modules-load[184]: Inserted module 'overlay' Jul 6 23:51:40.981511 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jul 6 23:51:40.981625 kernel: Bridge firewalling registered Jul 6 23:51:40.954774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:51:40.979100 systemd-modules-load[184]: Inserted module 'br_netfilter' Jul 6 23:51:40.982285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:51:40.985777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:40.986668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:51:40.992839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:51:40.995734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:51:40.997029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:51:41.015877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:51:41.025811 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:51:41.026505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:51:41.027075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:51:41.037804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:51:41.043089 dracut-cmdline[215]: dracut-dracut-053 Jul 6 23:51:41.046869 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:51:41.082316 systemd-resolved[219]: Positive Trust Anchors: Jul 6 23:51:41.082333 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:51:41.082369 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:51:41.085665 systemd-resolved[219]: Defaulting to hostname 'linux'. Jul 6 23:51:41.087499 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:51:41.088347 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:51:41.152591 kernel: SCSI subsystem initialized Jul 6 23:51:41.162574 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:51:41.173589 kernel: iscsi: registered transport (tcp) Jul 6 23:51:41.195796 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:51:41.195921 kernel: QLogic iSCSI HBA Driver Jul 6 23:51:41.249104 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
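
The dracut-cmdline entry above carries the full kernel command line, with several options repeated (rootflags=rw and mount.usrflags=ro appear twice, and there are two console= arguments). As a minimal sketch of how such a line splits into key/value options: the cmdline text below is copied verbatim from the log, and the parser is a simplification that ignores quoting.

```python
# Simplified parser for the kernel command line quoted in the
# dracut-cmdline entry above. Repeated options collect into lists.
CMDLINE = (
    "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected "
    "flatcar.oem.id=digitalocean "
    "verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876"
)

def parse_cmdline(cmdline: str) -> dict:
    opts = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        opts.setdefault(key, []).append(value)
    return opts

opts = parse_cmdline(CMDLINE)
assert opts["root"] == ["LABEL=ROOT"]
assert len(opts["verity.usrhash"][0]) == 64  # sha256 hex digest for dm-verity
print(opts["console"])  # ['ttyS0,115200n8', 'tty0']; the last console= owns /dev/console
```
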
Jul 6 23:51:41.255817 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:51:41.283828 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:51:41.283923 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:51:41.284685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:51:41.328585 kernel: raid6: avx2x4 gen() 17644 MB/s Jul 6 23:51:41.345595 kernel: raid6: avx2x2 gen() 17847 MB/s Jul 6 23:51:41.362652 kernel: raid6: avx2x1 gen() 13224 MB/s Jul 6 23:51:41.362736 kernel: raid6: using algorithm avx2x2 gen() 17847 MB/s Jul 6 23:51:41.380717 kernel: raid6: .... xor() 19575 MB/s, rmw enabled Jul 6 23:51:41.380815 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:51:41.403571 kernel: xor: automatically using best checksumming function avx Jul 6 23:51:41.561577 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:51:41.575723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:51:41.581878 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:51:41.598600 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jul 6 23:51:41.604392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:51:41.611989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:51:41.634925 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jul 6 23:51:41.674361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:51:41.678742 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:51:41.744347 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:51:41.751836 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:51:41.782543 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:51:41.785697 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:51:41.787129 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:51:41.787926 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:51:41.796213 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:51:41.831228 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:51:41.844584 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jul 6 23:51:41.854560 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:51:41.871526 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 6 23:51:41.877255 kernel: scsi host0: Virtio SCSI HBA Jul 6 23:51:41.887847 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:51:41.887939 kernel: AES CTR mode by8 optimization enabled Jul 6 23:51:41.898654 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:51:41.898719 kernel: GPT:9289727 != 125829119 Jul 6 23:51:41.901349 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:51:41.901645 kernel: GPT:9289727 != 125829119 Jul 6 23:51:41.901661 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:51:41.901674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:51:41.911594 kernel: libata version 3.00 loaded. 
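
The GPT complaints above (GPT:9289727 != 125829119) are the usual signature of a small disk image written onto a larger volume: the backup GPT header still sits at the last LBA the image was built for, not at the end of the grown virtio disk. Checking the arithmetic with the numbers from the log:

```python
# Sanity-check of the GPT warning: the backup header is at LBA 9289727,
# but the disk's real last sector is LBA 125829119.
SECTOR = 512
image_last_lba = 9_289_727    # where the backup GPT header currently sits
disk_last_lba = 125_829_119   # actual last sector of /dev/vda

disk_bytes = (disk_last_lba + 1) * SECTOR
print(f"{disk_bytes:,} B = {disk_bytes / 1e9:.1f} GB = {disk_bytes / 2**30:.1f} GiB")
# 64,424,509,440 B = 64.4 GB = 60.0 GiB, matching the virtio_blk line above

image_bytes = (image_last_lba + 1) * SECTOR
print(f"image was built for {image_bytes / 2**30:.2f} GiB")  # ~4.43 GiB
```
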
Jul 6 23:51:41.918941 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:51:41.919067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:51:41.925190 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jul 6 23:51:41.925398 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jul 6 23:51:41.920290 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:51:41.928914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:41.930486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:41.930975 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:41.940515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:41.947666 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 6 23:51:41.947972 kernel: ACPI: bus type USB registered Jul 6 23:51:41.947997 kernel: usbcore: registered new interface driver usbfs Jul 6 23:51:41.948846 kernel: usbcore: registered new interface driver hub Jul 6 23:51:41.950936 kernel: usbcore: registered new device driver usb Jul 6 23:51:41.950993 kernel: scsi host1: ata_piix Jul 6 23:51:41.964397 kernel: scsi host2: ata_piix Jul 6 23:51:41.964738 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jul 6 23:51:41.964764 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jul 6 23:51:41.986606 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jul 6 23:51:41.986891 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jul 6 23:51:41.987057 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jul 6 23:51:41.987198 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jul 6 23:51:41.993598 kernel: hub 1-0:1.0: USB hub found Jul 6 23:51:41.993838 kernel: hub 1-0:1.0: 2 ports detected Jul 6 23:51:42.016845 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (454) Jul 6 23:51:42.021577 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460) Jul 6 23:51:42.022780 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:51:42.048447 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:51:42.049330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:42.054513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:51:42.054983 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:51:42.060639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:51:42.065889 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:51:42.068735 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:51:42.074920 disk-uuid[540]: Primary Header is updated. Jul 6 23:51:42.074920 disk-uuid[540]: Secondary Entries is updated. Jul 6 23:51:42.074920 disk-uuid[540]: Secondary Header is updated. 
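
The Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device entries above use systemd's unit-name escaping for device paths: "/" becomes "-" and bytes outside [A-Za-z0-9:_.] become \xNN. A rough Python equivalent of systemd-escape --path, simplified (real systemd also special-cases a leading dot and empty strings):

```python
# Simplified systemd path escaping, enough to reproduce the device unit
# names in this log ("by-label" -> "by\x2dlabel", "/" -> "-").
def systemd_escape_path(path: str) -> str:
    def esc(c: str) -> str:
        if c.isalnum() or c in ":_.":
            return c
        return "".join(f"\\x{b:02x}" for b in c.encode())
    segments = path.strip("/").split("/")
    return "-".join("".join(esc(c) for c in seg) for seg in segments)

for dev in ("/dev/disk/by-label/EFI-SYSTEM",
            "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132",
            "/dev/mapper/usr"):
    print(systemd_escape_path(dev) + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
# dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device
# dev-mapper-usr.device
```
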
Jul 6 23:51:42.079381 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:51:42.091929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:51:42.100912 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:51:43.087575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:51:43.088856 disk-uuid[541]: The operation has completed successfully. Jul 6 23:51:43.137050 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:51:43.137239 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:51:43.143857 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:51:43.150693 sh[561]: Success Jul 6 23:51:43.166786 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:51:43.228810 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:51:43.237672 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:51:43.241869 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:51:43.268728 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:51:43.268809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:51:43.269960 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:51:43.273766 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:51:43.273892 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:51:43.283167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:51:43.284576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:51:43.292796 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:51:43.296851 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:51:43.310592 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:43.310687 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:51:43.310712 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:51:43.316594 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:51:43.329431 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:51:43.331277 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:43.337816 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:51:43.345855 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:51:43.427341 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:51:43.438934 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:51:43.469149 systemd-networkd[746]: lo: Link UP Jul 6 23:51:43.469160 systemd-networkd[746]: lo: Gained carrier Jul 6 23:51:43.472073 systemd-networkd[746]: Enumeration completed Jul 6 23:51:43.472513 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jul 6 23:51:43.472518 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
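
The "found matching network ... based on potentially unpredictable interface name" messages above reflect how systemd-networkd picks a .network file: candidates are tried in lexical order and the first whose [Match] section applies wins, which is why the DigitalOcean unit is named yy-* so it sorts before zz-default.network. A toy model of that selection; the match patterns below are invented stand-ins, since the real units match on interface properties rather than bare names:

```python
# Toy model of .network selection: lexical order, first [Match] wins.
# The patterns are illustrative placeholders, not the real unit contents.
import fnmatch

NETWORK_FILES = [
    ("/usr/lib/systemd/network/yy-digitalocean.network", "eth0"),
    ("/usr/lib/systemd/network/zz-default.network", "*"),
]

def pick_network_file(ifname: str) -> str:
    for path, pattern in sorted(NETWORK_FILES):
        if fnmatch.fnmatch(ifname, pattern):
            return path
    raise LookupError(f"no .network file matches {ifname}")

print(pick_network_file("eth0"))  # .../yy-digitalocean.network, as in the log
print(pick_network_file("eth1"))  # .../zz-default.network
```
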
Jul 6 23:51:43.472739 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:51:43.473530 systemd[1]: Reached target network.target - Network. Jul 6 23:51:43.475508 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:51:43.475514 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:51:43.476854 systemd-networkd[746]: eth0: Link UP Jul 6 23:51:43.476859 systemd-networkd[746]: eth0: Gained carrier Jul 6 23:51:43.476880 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jul 6 23:51:43.483785 systemd-networkd[746]: eth1: Link UP Jul 6 23:51:43.483790 systemd-networkd[746]: eth1: Gained carrier Jul 6 23:51:43.483806 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:51:43.502682 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 Jul 6 23:51:43.508706 systemd-networkd[746]: eth0: DHCPv4 address 209.38.68.255/20, gateway 209.38.64.1 acquired from 169.254.169.253 Jul 6 23:51:43.513565 ignition[654]: Ignition 2.19.0 Jul 6 23:51:43.513603 ignition[654]: Stage: fetch-offline Jul 6 23:51:43.513687 ignition[654]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:43.515731 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:51:43.513700 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:43.513887 ignition[654]: parsed url from cmdline: "" Jul 6 23:51:43.513891 ignition[654]: no config URL provided Jul 6 23:51:43.513898 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:51:43.513906 ignition[654]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:51:43.513914 ignition[654]: failed to fetch config: resource requires networking Jul 6 23:51:43.514155 ignition[654]: Ignition finished successfully Jul 6 23:51:43.521776 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
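
The DHCPv4 leases above are easy to sanity-check with the standard library: both interfaces get a /20 prefix, and the advertised gateway falls inside eth0's network.

```python
# Checking the DHCP leases recorded in the log with stdlib ipaddress.
import ipaddress

eth0 = ipaddress.ip_interface("209.38.68.255/20")  # public interface
eth1 = ipaddress.ip_interface("10.124.0.32/20")    # private interface

print(eth0.network)  # 209.38.64.0/20
print(eth1.network)  # 10.124.0.0/20
assert ipaddress.ip_address("209.38.64.1") in eth0.network  # the gateway fits
```
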
Jul 6 23:51:43.555617 ignition[756]: Ignition 2.19.0 Jul 6 23:51:43.555635 ignition[756]: Stage: fetch Jul 6 23:51:43.555958 ignition[756]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:43.555977 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:43.556158 ignition[756]: parsed url from cmdline: "" Jul 6 23:51:43.556165 ignition[756]: no config URL provided Jul 6 23:51:43.556174 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:51:43.556189 ignition[756]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:51:43.556220 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jul 6 23:51:43.600392 ignition[756]: GET result: OK Jul 6 23:51:43.600655 ignition[756]: parsing config with SHA512: 5e4c6a30f06adc79a62c7fcb588aa72a12aaa6a10ae1329e0485086de032aafc5da5552a9f0e6ea31d1245f68e252dbe1a867fa0f9d1736025d6e8b6b58e29ed Jul 6 23:51:43.610085 unknown[756]: fetched base config from "system" Jul 6 23:51:43.610108 unknown[756]: fetched base config from "system" Jul 6 23:51:43.611133 ignition[756]: fetch: fetch complete Jul 6 23:51:43.610119 unknown[756]: fetched user config from "digitalocean" Jul 6 23:51:43.611142 ignition[756]: fetch: fetch passed Jul 6 23:51:43.611224 ignition[756]: Ignition finished successfully Jul 6 23:51:43.613753 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:51:43.619864 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:51:43.641028 ignition[763]: Ignition 2.19.0 Jul 6 23:51:43.641044 ignition[763]: Stage: kargs Jul 6 23:51:43.641378 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:43.641396 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:43.643053 ignition[763]: kargs: kargs passed Jul 6 23:51:43.643117 ignition[763]: Ignition finished successfully Jul 6 23:51:43.646489 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:51:43.657845 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:51:43.679956 ignition[769]: Ignition 2.19.0 Jul 6 23:51:43.679972 ignition[769]: Stage: disks Jul 6 23:51:43.680289 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:43.680307 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:43.682770 ignition[769]: disks: disks passed Jul 6 23:51:43.682888 ignition[769]: Ignition finished successfully Jul 6 23:51:43.684094 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:51:43.688838 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:51:43.689677 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:51:43.690600 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:51:43.691439 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:51:43.692240 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:51:43.697875 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:51:43.726248 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 6 23:51:43.728687 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:51:43.738781 systemd[1]: Mounting sysroot.mount - /sysroot... 
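
The fetch stage above pulls the config from the droplet's link-local metadata service and logs a SHA512 fingerprint of what it parsed. A sketch that reproduces both steps from inside the droplet; the digest will only match the one above for this particular boot's user data:

```python
# Re-fetch the user data and print the same kind of SHA512 fingerprint
# the ignition fetch stage logged. 169.254.169.254 is only reachable
# from the droplet itself.
import hashlib
import urllib.request

URL = "http://169.254.169.254/metadata/v1/user-data"  # endpoint from the log

with urllib.request.urlopen(URL, timeout=5) as resp:
    user_data = resp.read()

print(hashlib.sha512(user_data).hexdigest())
# 5e4c6a30f06adc79a62c7fcb588aa72a... for the boot captured here
```
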
Jul 6 23:51:43.838646 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:51:43.839666 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:51:43.840994 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:51:43.849902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:51:43.853774 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:51:43.855927 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jul 6 23:51:43.862570 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786) Jul 6 23:51:43.868527 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:43.868653 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:51:43.868670 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:51:43.870800 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:51:43.872490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:51:43.872548 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:51:43.876555 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:51:43.879831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:51:43.881852 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:51:43.890898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:51:43.957968 coreos-metadata[788]: Jul 06 23:51:43.957 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:51:43.960656 coreos-metadata[789]: Jul 06 23:51:43.960 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:51:43.967889 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:51:43.971098 coreos-metadata[788]: Jul 06 23:51:43.971 INFO Fetch successful Jul 6 23:51:43.973327 coreos-metadata[789]: Jul 06 23:51:43.973 INFO Fetch successful Jul 6 23:51:43.981244 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jul 6 23:51:43.982451 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:51:43.981413 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jul 6 23:51:43.985807 coreos-metadata[789]: Jul 06 23:51:43.982 INFO wrote hostname ci-4081.3.4-c-43d64a8ca6 to /sysroot/etc/hostname Jul 6 23:51:43.986326 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:51:43.991470 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:51:43.998326 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:51:44.119886 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:51:44.129781 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:51:44.136574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:51:44.148587 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:44.188385 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
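
Above, coreos-metadata fetches http://169.254.169.254/metadata/v1.json and the hostname agent writes ci-4081.3.4-c-43d64a8ca6 into /sysroot/etc/hostname. A reduced sketch of that step, assuming (as on DigitalOcean) a top-level hostname field in the metadata JSON:

```python
# Reduced sketch of the metadata-hostname step: read the droplet metadata
# JSON and write the hostname under the target root. The "hostname" key
# is assumed from DigitalOcean's metadata format.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log
SYSROOT = "/sysroot"  # target root mounted by the initrd

with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
    hostname = json.load(resp)["hostname"]

with open(f"{SYSROOT}/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to {SYSROOT}/etc/hostname")
```
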
Jul 6 23:51:44.192244 ignition[907]: INFO : Ignition 2.19.0 Jul 6 23:51:44.193649 ignition[907]: INFO : Stage: mount Jul 6 23:51:44.193649 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:44.193649 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:44.195667 ignition[907]: INFO : mount: mount passed Jul 6 23:51:44.196036 ignition[907]: INFO : Ignition finished successfully Jul 6 23:51:44.197559 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:51:44.202858 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:51:44.268581 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:51:44.274967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:51:44.300591 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919) Jul 6 23:51:44.304086 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:44.304199 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:51:44.304221 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:51:44.311752 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:51:44.313825 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:51:44.350401 ignition[936]: INFO : Ignition 2.19.0 Jul 6 23:51:44.350401 ignition[936]: INFO : Stage: files Jul 6 23:51:44.351937 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:44.351937 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:44.351937 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:51:44.353946 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:51:44.353946 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:51:44.356605 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:51:44.357350 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:51:44.357350 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:51:44.357193 unknown[936]: wrote ssh authorized keys file for user: core Jul 6 23:51:44.359805 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:51:44.359805 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 6 23:51:44.399933 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:51:44.649213 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:51:44.649213 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:51:44.650887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:51:44.655668 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:51:44.655668 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:51:44.655668 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:51:44.655668 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:51:44.655668 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:51:44.655668 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 6 23:51:45.226792 systemd-networkd[746]: eth1: Gained IPv6LL Jul 6 23:51:45.291912 systemd-networkd[746]: eth0: Gained IPv6LL Jul 6 23:51:45.353563 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:51:45.750586 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:51:45.750586 ignition[936]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:51:45.752645 ignition[936]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:51:45.752645 ignition[936]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:51:45.752645 ignition[936]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:51:45.752645 ignition[936]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:51:45.752645 ignition[936]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:51:45.756489 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:51:45.756489 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:51:45.756489 ignition[936]: INFO : files: files passed Jul 6 23:51:45.756489 ignition[936]: INFO : Ignition finished successfully Jul 6 23:51:45.756091 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jul 6 23:51:45.764824 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:51:45.767747 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:51:45.772168 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:51:45.772314 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:51:45.795435 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:51:45.795435 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:51:45.797760 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:51:45.798404 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:51:45.799406 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:51:45.801744 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:51:45.838869 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:51:45.839044 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:51:45.840720 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:51:45.841284 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:51:45.842321 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:51:45.847816 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:51:45.870807 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:51:45.878806 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:51:45.898467 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:51:45.899575 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:51:45.900103 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:51:45.900794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:51:45.900961 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:51:45.901863 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:51:45.902801 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:51:45.903332 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:51:45.904085 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:51:45.904890 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:51:45.905869 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:51:45.906657 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:51:45.907700 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:51:45.908496 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:51:45.909317 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:51:45.910164 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:51:45.910485 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jul 6 23:51:45.911571 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:51:45.912576 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:51:45.913097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:51:45.913359 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:51:45.914154 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:51:45.914395 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:51:45.915558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:51:45.915837 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:51:45.916793 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:51:45.917056 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:51:45.918116 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:51:45.918230 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:51:45.928911 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:51:45.933812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:51:45.934864 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:51:45.935126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:51:45.935701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:51:45.935826 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:51:45.942883 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:51:45.942987 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:51:45.953184 ignition[988]: INFO : Ignition 2.19.0 Jul 6 23:51:45.953184 ignition[988]: INFO : Stage: umount Jul 6 23:51:45.954204 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:45.954204 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:45.955784 ignition[988]: INFO : umount: umount passed Jul 6 23:51:45.955784 ignition[988]: INFO : Ignition finished successfully Jul 6 23:51:45.956927 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:51:45.957036 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:51:45.957694 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:51:45.957742 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:51:45.958085 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:51:45.958122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:51:45.958444 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:51:45.958480 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:51:45.960365 systemd[1]: Stopped target network.target - Network. Jul 6 23:51:45.964926 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:51:45.965015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:51:45.965568 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:51:45.965852 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 6 23:51:45.966617 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:51:45.967062 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:51:45.967345 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:51:45.967664 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:51:45.967716 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:51:45.968163 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:51:45.968221 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:51:45.970825 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:51:45.970903 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:51:45.971382 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:51:45.971423 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:51:45.971944 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:51:45.974838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:51:45.976602 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:51:45.976722 systemd-networkd[746]: eth0: DHCPv6 lease lost Jul 6 23:51:45.977220 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:51:45.977333 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:51:45.980480 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:51:45.981291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:51:45.982105 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:51:45.982228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:51:45.982635 systemd-networkd[746]: eth1: DHCPv6 lease lost Jul 6 23:51:45.985783 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:51:45.985934 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:51:45.987420 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:51:45.987487 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:51:45.998091 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:51:45.998456 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:51:45.998524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:51:45.998953 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:51:45.999003 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:51:45.999441 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:51:45.999485 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:51:45.999867 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:51:45.999908 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:51:46.000644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:51:46.013921 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:51:46.014522 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:51:46.017478 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 6 23:51:46.017736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:51:46.018672 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:51:46.018715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:51:46.019163 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:51:46.019218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:51:46.019868 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:51:46.019918 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:51:46.020901 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:51:46.020966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:51:46.022148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:51:46.022216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:51:46.027839 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:51:46.028298 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:51:46.028366 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:51:46.029125 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:51:46.029190 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:51:46.031377 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:51:46.031434 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:51:46.032060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:46.032105 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:46.040319 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:51:46.040460 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:51:46.041676 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:51:46.045744 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:51:46.063643 systemd[1]: Switching root. Jul 6 23:51:46.106617 systemd-journald[183]: Journal stopped Jul 6 23:51:47.171303 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jul 6 23:51:47.171429 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:51:47.171462 kernel: SELinux: policy capability open_perms=1 Jul 6 23:51:47.171490 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:51:47.171508 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:51:47.173621 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:51:47.173701 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:51:47.173730 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:51:47.173754 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:51:47.173774 kernel: audit: type=1403 audit(1751845906.268:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:51:47.173797 systemd[1]: Successfully loaded SELinux policy in 40.210ms. Jul 6 23:51:47.173823 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.307ms. 
Jul 6 23:51:47.173846 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:51:47.173866 systemd[1]: Detected virtualization kvm. Jul 6 23:51:47.175559 systemd[1]: Detected architecture x86-64. Jul 6 23:51:47.175597 systemd[1]: Detected first boot. Jul 6 23:51:47.175618 systemd[1]: Hostname set to <ci-4081.3.4-c-43d64a8ca6>. Jul 6 23:51:47.175638 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:51:47.175658 zram_generator::config[1030]: No configuration found. Jul 6 23:51:47.177578 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:51:47.177624 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:51:47.177644 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:51:47.177670 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:51:47.177690 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:51:47.177708 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:51:47.177726 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:51:47.177748 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:51:47.177769 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:51:47.177788 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:51:47.177808 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:51:47.177832 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:51:47.177852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:51:47.177870 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:51:47.177888 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:51:47.177906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:51:47.177924 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:51:47.177942 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:51:47.177960 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:51:47.177979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:51:47.178004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:51:47.178023 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:51:47.178042 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:51:47.178061 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:51:47.178082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:51:47.178103 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:51:47.178122 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:51:47.178145 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:51:47.178164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:51:47.178183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:51:47.178203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:51:47.178223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:51:47.178243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:51:47.178263 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:51:47.178289 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:51:47.178306 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:51:47.178329 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:51:47.178345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:47.178357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:51:47.178370 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:51:47.178383 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:51:47.178396 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:51:47.178408 systemd[1]: Reached target machines.target - Containers. Jul 6 23:51:47.178421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:51:47.178437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:47.178452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:51:47.178464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:51:47.178476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:51:47.178489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:51:47.178502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:51:47.178515 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:51:47.178528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:51:47.180611 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:51:47.180645 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:51:47.180659 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:51:47.180672 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:51:47.180685 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:51:47.180699 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:51:47.180712 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:51:47.180724 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:51:47.180737 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 6 23:51:47.180752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:51:47.180765 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:51:47.180778 systemd[1]: Stopped verity-setup.service. Jul 6 23:51:47.180792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:47.180805 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:51:47.180818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:51:47.180831 kernel: loop: module loaded Jul 6 23:51:47.180844 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:51:47.180856 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:51:47.180873 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:51:47.180891 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:51:47.180909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:51:47.180934 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:51:47.180952 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:51:47.180971 kernel: fuse: init (API version 7.39) Jul 6 23:51:47.180988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:51:47.181008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:51:47.181028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:51:47.181045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:51:47.181062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:51:47.181076 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:51:47.181089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:51:47.181101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:51:47.181133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:51:47.181151 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:51:47.181205 systemd-journald[1099]: Collecting audit messages is disabled. Jul 6 23:51:47.181232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:51:47.181250 systemd-journald[1099]: Journal started Jul 6 23:51:47.181274 systemd-journald[1099]: Runtime Journal (/run/log/journal/1b899485577a48e5b6d3b83f7f44d690) is 4.9M, max 39.3M, 34.4M free. Jul 6 23:51:47.188613 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:51:46.868687 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:51:46.897509 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:51:46.897953 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:51:47.192561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:51:47.219691 kernel: ACPI: bus type drm_connector registered Jul 6 23:51:47.219787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:51:47.226567 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 6 23:51:47.227531 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:51:47.228616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:51:47.230600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:51:47.231305 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:51:47.231985 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:51:47.232419 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:51:47.233959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:51:47.249290 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:51:47.249364 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:51:47.252585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:51:47.261783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:51:47.271945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:51:47.272713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:47.276004 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:51:47.280198 systemd-tmpfiles[1124]: ACLs are not supported, ignoring. Jul 6 23:51:47.280223 systemd-tmpfiles[1124]: ACLs are not supported, ignoring. Jul 6 23:51:47.291056 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:51:47.292295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:51:47.295126 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:51:47.306784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:51:47.311318 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:51:47.314511 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:51:47.316601 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:51:47.344395 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:51:47.370607 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:51:47.372678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:51:47.374696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:51:47.376077 systemd-journald[1099]: Time spent on flushing to /var/log/journal/1b899485577a48e5b6d3b83f7f44d690 is 96.270ms for 994 entries. Jul 6 23:51:47.376077 systemd-journald[1099]: System Journal (/var/log/journal/1b899485577a48e5b6d3b83f7f44d690) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:51:47.484585 systemd-journald[1099]: Received client request to flush runtime journal. Jul 6 23:51:47.484650 kernel: loop0: detected capacity change from 0 to 142488 Jul 6 23:51:47.484668 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:51:47.378379 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jul 6 23:51:47.390924 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:51:47.399801 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:51:47.468010 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:51:47.469063 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:51:47.476264 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:51:47.487212 kernel: loop1: detected capacity change from 0 to 224512 Jul 6 23:51:47.491701 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:51:47.517863 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:51:47.524818 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:51:47.534560 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:51:47.587614 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 6 23:51:47.587642 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 6 23:51:47.595714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:51:47.602678 kernel: loop3: detected capacity change from 0 to 8 Jul 6 23:51:47.627600 kernel: loop4: detected capacity change from 0 to 142488 Jul 6 23:51:47.653981 kernel: loop5: detected capacity change from 0 to 224512 Jul 6 23:51:47.673747 kernel: loop6: detected capacity change from 0 to 140768 Jul 6 23:51:47.700569 kernel: loop7: detected capacity change from 0 to 8 Jul 6 23:51:47.703388 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jul 6 23:51:47.703970 (sd-merge)[1178]: Merged extensions into '/usr'. Jul 6 23:51:47.716741 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:51:47.716766 systemd[1]: Reloading... Jul 6 23:51:47.890581 zram_generator::config[1203]: No configuration found. Jul 6 23:51:47.982806 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:51:48.078854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:51:48.135486 systemd[1]: Reloading finished in 417 ms. Jul 6 23:51:48.180354 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:51:48.182104 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:51:48.195972 systemd[1]: Starting ensure-sysext.service... Jul 6 23:51:48.201159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:51:48.218380 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:51:48.218403 systemd[1]: Reloading... Jul 6 23:51:48.245024 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:51:48.245488 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jul 6 23:51:48.246427 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:51:48.247043 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 6 23:51:48.247268 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 6 23:51:48.251212 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:51:48.251227 systemd-tmpfiles[1248]: Skipping /boot Jul 6 23:51:48.263411 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:51:48.263426 systemd-tmpfiles[1248]: Skipping /boot Jul 6 23:51:48.332683 zram_generator::config[1275]: No configuration found. Jul 6 23:51:48.464936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:51:48.518084 systemd[1]: Reloading finished in 298 ms. Jul 6 23:51:48.533888 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:51:48.540171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:51:48.548518 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:51:48.559857 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:51:48.565567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:51:48.574810 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:51:48.580826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:51:48.588760 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:51:48.594789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.595059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:48.602953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:51:48.604874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:51:48.609471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:51:48.610040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:48.621874 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:51:48.622263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.625525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.625753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:48.625929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:48.626017 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 6 23:51:48.629591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.629836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:48.633410 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:51:48.634322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:48.634484 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.639718 systemd[1]: Finished ensure-sysext.service. Jul 6 23:51:48.651427 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:51:48.662698 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:51:48.669619 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:51:48.682180 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:51:48.683765 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:51:48.684472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:51:48.686264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:51:48.687116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:51:48.687642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:51:48.693451 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:51:48.693591 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:51:48.704011 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:51:48.704219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:51:48.705294 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:51:48.705830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:51:48.706985 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jul 6 23:51:48.707496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:51:48.721005 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:51:48.726917 augenrules[1357]: No rules Jul 6 23:51:48.729564 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:51:48.740993 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:51:48.746759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:51:48.758307 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:51:48.894775 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:51:48.895266 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:51:48.903365 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jul 6 23:51:48.914474 systemd-networkd[1371]: lo: Link UP Jul 6 23:51:48.919639 systemd-networkd[1371]: lo: Gained carrier Jul 6 23:51:48.920723 systemd-networkd[1371]: Enumeration completed Jul 6 23:51:48.920844 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:51:48.927839 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:51:48.929591 systemd-resolved[1325]: Positive Trust Anchors: Jul 6 23:51:48.931086 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:51:48.931139 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:51:48.941777 systemd-resolved[1325]: Using system hostname 'ci-4081.3.4-c-43d64a8ca6'. Jul 6 23:51:48.955105 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 6 23:51:48.956191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.956385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:48.963820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:51:48.967060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:51:48.970728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:51:48.971216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:48.971261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:51:48.971280 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:48.971455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:51:48.971954 systemd[1]: Reached target network.target - Network. Jul 6 23:51:48.972662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:51:48.986044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:51:48.986220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:51:48.997612 kernel: ISO 9660 Extensions: RRIP_1991A Jul 6 23:51:49.001468 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 6 23:51:49.010032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:51:49.010219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:51:49.012384 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:51:49.013612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 6 23:51:49.015345 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-06:5c:23:22:54:be.network. Jul 6 23:51:49.017719 systemd-networkd[1371]: eth1: Link UP Jul 6 23:51:49.017861 systemd-networkd[1371]: eth1: Gained carrier Jul 6 23:51:49.021946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:51:49.022003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:51:49.022683 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:49.032582 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1368) Jul 6 23:51:49.057815 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-c2:29:66:dd:df:5d.network. Jul 6 23:51:49.059847 systemd-networkd[1371]: eth0: Link UP Jul 6 23:51:49.059854 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:49.060069 systemd-networkd[1371]: eth0: Gained carrier Jul 6 23:51:49.063960 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:49.064996 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:49.073592 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:51:49.088628 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:51:49.095587 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 6 23:51:49.134613 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:51:49.205058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:51:49.216777 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:51:49.227563 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:51:49.233807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:49.292570 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 6 23:51:49.292655 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 6 23:51:49.299701 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:51:49.299785 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 6 23:51:49.299802 kernel: [drm] features: -context_init Jul 6 23:51:49.301845 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:51:49.303712 kernel: [drm] number of scanouts: 1 Jul 6 23:51:49.303795 kernel: [drm] number of cap sets: 0 Jul 6 23:51:49.307567 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 6 23:51:49.316668 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 6 23:51:49.316743 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:51:49.325672 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 6 23:51:49.329452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:49.331749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:49.340362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:51:49.356598 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:49.356865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:49.376802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:49.381566 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:51:49.417342 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:51:49.425997 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:51:49.452248 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:51:49.455709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:49.478362 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:51:49.479792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:51:49.479926 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:51:49.480170 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:51:49.480279 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:51:49.480598 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:51:49.480792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:51:49.480898 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:51:49.481020 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:51:49.481067 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:51:49.481322 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:51:49.483313 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:51:49.485387 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:51:49.491816 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:51:49.494422 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:51:49.498183 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:51:49.500757 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:51:49.501755 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:51:49.502499 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:51:49.502530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:51:49.511749 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:51:49.516687 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:51:49.521805 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:51:49.527700 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:51:49.530403 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:51:49.539812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 6 23:51:49.540397 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:51:49.550778 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:51:49.564581 jq[1436]: false Jul 6 23:51:49.560142 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:51:49.568880 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:51:49.582791 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:51:49.594056 coreos-metadata[1434]: Jul 06 23:51:49.593 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:51:49.594923 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:51:49.603987 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:51:49.605598 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:51:49.609019 coreos-metadata[1434]: Jul 06 23:51:49.607 INFO Fetch successful Jul 6 23:51:49.610057 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:51:49.613253 extend-filesystems[1437]: Found loop4 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found loop5 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found loop6 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found loop7 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda1 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda2 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda3 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found usr Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda4 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda6 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda7 Jul 6 23:51:49.616219 extend-filesystems[1437]: Found vda9 Jul 6 23:51:49.616219 extend-filesystems[1437]: Checking size of /dev/vda9 Jul 6 23:51:49.615734 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:51:49.619425 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:51:49.632118 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:51:49.632412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:51:49.635945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:51:49.636164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:51:49.676563 extend-filesystems[1437]: Resized partition /dev/vda9 Jul 6 23:51:49.700453 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:51:49.693126 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:51:49.728764 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 6 23:51:49.721748 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:51:49.724028 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 6 23:51:49.729369 jq[1451]: true Jul 6 23:51:49.749500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1382) Jul 6 23:51:49.742675 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:51:49.732505 dbus-daemon[1435]: [system] SELinux support is enabled Jul 6 23:51:49.759787 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:51:49.759859 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:51:49.762070 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:51:49.762158 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 6 23:51:49.762182 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:51:49.767861 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:51:49.769934 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:51:49.813131 update_engine[1450]: I20250706 23:51:49.806526 1450 main.cc:92] Flatcar Update Engine starting Jul 6 23:51:49.826491 jq[1474]: true Jul 6 23:51:49.826695 tar[1456]: linux-amd64/LICENSE Jul 6 23:51:49.826695 tar[1456]: linux-amd64/helm Jul 6 23:51:49.826882 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:51:49.831130 update_engine[1450]: I20250706 23:51:49.830307 1450 update_check_scheduler.cc:74] Next update check in 4m10s Jul 6 23:51:49.836369 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:51:49.890634 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:51:49.903321 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:51:49.907594 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 6 23:51:49.927950 systemd[1]: Starting sshkeys.service... Jul 6 23:51:49.932692 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:51:49.932692 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 6 23:51:49.932692 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 6 23:51:49.949721 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jul 6 23:51:49.949721 extend-filesystems[1437]: Found vdb Jul 6 23:51:49.934019 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:51:49.934585 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:51:49.989018 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:51:50.002262 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:51:50.028431 systemd-logind[1445]: New seat seat0. 
Jul 6 23:51:50.035327 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:51:50.035363 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:51:50.036688 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:51:50.108631 coreos-metadata[1502]: Jul 06 23:51:50.108 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:51:50.134558 coreos-metadata[1502]: Jul 06 23:51:50.130 INFO Fetch successful Jul 6 23:51:50.157962 unknown[1502]: wrote ssh authorized keys file for user: core Jul 6 23:51:50.196414 update-ssh-keys[1514]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:51:50.198756 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:51:50.205556 systemd[1]: Finished sshkeys.service. Jul 6 23:51:50.230731 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:51:50.261476 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:51:50.335224 containerd[1471]: time="2025-07-06T23:51:50.335056662Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:51:50.336129 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:51:50.348920 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:51:50.360946 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:51:50.361205 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:51:50.370983 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:51:50.389584 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:51:50.399000 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:51:50.410975 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:51:50.412646 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:51:50.419513 containerd[1471]: time="2025-07-06T23:51:50.418034502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.420137 containerd[1471]: time="2025-07-06T23:51:50.420072018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:51:50.420295 containerd[1471]: time="2025-07-06T23:51:50.420270584Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:51:50.420440 containerd[1471]: time="2025-07-06T23:51:50.420419657Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:51:50.420894 containerd[1471]: time="2025-07-06T23:51:50.420826328Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:51:50.420984 containerd[1471]: time="2025-07-06T23:51:50.420971559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.421170 containerd[1471]: time="2025-07-06T23:51:50.421150157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:51:50.423948 containerd[1471]: time="2025-07-06T23:51:50.423595203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.425742 containerd[1471]: time="2025-07-06T23:51:50.425690292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:51:50.426774 containerd[1471]: time="2025-07-06T23:51:50.425864390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.426774 containerd[1471]: time="2025-07-06T23:51:50.425896624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:51:50.426774 containerd[1471]: time="2025-07-06T23:51:50.425912479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.426774 containerd[1471]: time="2025-07-06T23:51:50.426130445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.426774 containerd[1471]: time="2025-07-06T23:51:50.426584707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:51:50.427111 containerd[1471]: time="2025-07-06T23:51:50.427082660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:51:50.427192 containerd[1471]: time="2025-07-06T23:51:50.427174892Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:51:50.427463 containerd[1471]: time="2025-07-06T23:51:50.427431419Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:51:50.430580 containerd[1471]: time="2025-07-06T23:51:50.430390285Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:51:50.434397 containerd[1471]: time="2025-07-06T23:51:50.434333611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:51:50.434545 containerd[1471]: time="2025-07-06T23:51:50.434424081Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:51:50.434545 containerd[1471]: time="2025-07-06T23:51:50.434448810Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:51:50.434545 containerd[1471]: time="2025-07-06T23:51:50.434471095Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:51:50.434545 containerd[1471]: time="2025-07-06T23:51:50.434492607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:51:50.434838 containerd[1471]: time="2025-07-06T23:51:50.434808935Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.435892824Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436143882Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436188375Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436265320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436301348Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436329570Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436357147Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436384634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436408800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436435137Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436460357Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.436820 containerd[1471]: time="2025-07-06T23:51:50.436486236Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:51:50.437449 containerd[1471]: time="2025-07-06T23:51:50.436524071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437449 containerd[1471]: time="2025-07-06T23:51:50.437332595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437449 containerd[1471]: time="2025-07-06T23:51:50.437382898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437590 containerd[1471]: time="2025-07-06T23:51:50.437467041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437590 containerd[1471]: time="2025-07-06T23:51:50.437493342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437590 containerd[1471]: time="2025-07-06T23:51:50.437557749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437590 containerd[1471]: time="2025-07-06T23:51:50.437583209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 6 23:51:50.437720 containerd[1471]: time="2025-07-06T23:51:50.437608558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437720 containerd[1471]: time="2025-07-06T23:51:50.437646460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437800 containerd[1471]: time="2025-07-06T23:51:50.437718040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437800 containerd[1471]: time="2025-07-06T23:51:50.437768088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437871 containerd[1471]: time="2025-07-06T23:51:50.437804100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437871 containerd[1471]: time="2025-07-06T23:51:50.437831807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.437941 containerd[1471]: time="2025-07-06T23:51:50.437874329Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:51:50.437941 containerd[1471]: time="2025-07-06T23:51:50.437916080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.438017 containerd[1471]: time="2025-07-06T23:51:50.437971545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.438017 containerd[1471]: time="2025-07-06T23:51:50.437998095Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441071588Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441163248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441186853Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441208004Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441224613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441249469Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441266738Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:51:50.441542 containerd[1471]: time="2025-07-06T23:51:50.441282087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:51:50.441959 containerd[1471]: time="2025-07-06T23:51:50.441796162Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:51:50.441959 containerd[1471]: time="2025-07-06T23:51:50.441888818Z" level=info msg="Connect containerd service" Jul 6 23:51:50.441959 containerd[1471]: time="2025-07-06T23:51:50.441950211Z" level=info msg="using legacy CRI server" Jul 6 23:51:50.441959 containerd[1471]: time="2025-07-06T23:51:50.441961975Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:51:50.442272 containerd[1471]: time="2025-07-06T23:51:50.442133858Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:51:50.444319 containerd[1471]: time="2025-07-06T23:51:50.444222269Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:51:50.447072 
containerd[1471]: time="2025-07-06T23:51:50.444693958Z" level=info msg="Start subscribing containerd event" Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.444780668Z" level=info msg="Start recovering state" Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.444896915Z" level=info msg="Start event monitor" Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.444932781Z" level=info msg="Start snapshots syncer" Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.444950381Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.444962590Z" level=info msg="Start streaming server" Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.445256744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:51:50.447072 containerd[1471]: time="2025-07-06T23:51:50.445340392Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:51:50.445577 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:51:50.447839 containerd[1471]: time="2025-07-06T23:51:50.447802590Z" level=info msg="containerd successfully booted in 0.116594s" Jul 6 23:51:50.718602 tar[1456]: linux-amd64/README.md Jul 6 23:51:50.733975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:51:50.922718 systemd-networkd[1371]: eth0: Gained IPv6LL Jul 6 23:51:50.923191 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:50.927325 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:51:50.929492 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:51:50.944269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:51:50.949504 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:51:50.976965 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:51:50.987232 systemd-networkd[1371]: eth1: Gained IPv6LL Jul 6 23:51:50.987854 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:51.958403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:51:51.959724 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:51:51.963224 systemd[1]: Startup finished in 1.002s (kernel) + 5.568s (initrd) + 5.734s (userspace) = 12.305s. Jul 6 23:51:51.978093 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:51:52.562650 kubelet[1557]: E0706 23:51:52.562523 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:51:52.565377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:51:52.565553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:51:52.565896 systemd[1]: kubelet.service: Consumed 1.227s CPU time. Jul 6 23:51:55.069520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 6 23:51:55.079879 systemd[1]: Started sshd@0-209.38.68.255:22-139.178.89.65:34926.service - OpenSSH per-connection server daemon (139.178.89.65:34926). Jul 6 23:51:55.144587 sshd[1569]: Accepted publickey for core from 139.178.89.65 port 34926 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:55.147264 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:55.164014 systemd-logind[1445]: New session 1 of user core. Jul 6 23:51:55.165467 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:51:55.178143 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:51:55.197073 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:51:55.203882 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:51:55.219813 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:51:55.333172 systemd[1573]: Queued start job for default target default.target. Jul 6 23:51:55.343916 systemd[1573]: Created slice app.slice - User Application Slice. Jul 6 23:51:55.343953 systemd[1573]: Reached target paths.target - Paths. Jul 6 23:51:55.343968 systemd[1573]: Reached target timers.target - Timers. Jul 6 23:51:55.346029 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:51:55.363842 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:51:55.363978 systemd[1573]: Reached target sockets.target - Sockets. Jul 6 23:51:55.363993 systemd[1573]: Reached target basic.target - Basic System. Jul 6 23:51:55.364044 systemd[1573]: Reached target default.target - Main User Target. Jul 6 23:51:55.364077 systemd[1573]: Startup finished in 135ms. Jul 6 23:51:55.364238 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:51:55.372819 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:51:55.447877 systemd[1]: Started sshd@1-209.38.68.255:22-139.178.89.65:34940.service - OpenSSH per-connection server daemon (139.178.89.65:34940). Jul 6 23:51:55.507457 sshd[1584]: Accepted publickey for core from 139.178.89.65 port 34940 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:55.509907 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:55.515983 systemd-logind[1445]: New session 2 of user core. Jul 6 23:51:55.525895 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:51:55.589927 sshd[1584]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:55.600233 systemd[1]: sshd@1-209.38.68.255:22-139.178.89.65:34940.service: Deactivated successfully. Jul 6 23:51:55.602358 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:51:55.603097 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:51:55.617062 systemd[1]: Started sshd@2-209.38.68.255:22-139.178.89.65:34952.service - OpenSSH per-connection server daemon (139.178.89.65:34952). Jul 6 23:51:55.618648 systemd-logind[1445]: Removed session 2. Jul 6 23:51:55.655020 sshd[1591]: Accepted publickey for core from 139.178.89.65 port 34952 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:55.656864 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:55.663611 systemd-logind[1445]: New session 3 of user core. 
Jul 6 23:51:55.668815 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:51:55.726830 sshd[1591]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:55.738670 systemd[1]: sshd@2-209.38.68.255:22-139.178.89.65:34952.service: Deactivated successfully. Jul 6 23:51:55.740455 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:51:55.742746 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:51:55.746937 systemd[1]: Started sshd@3-209.38.68.255:22-139.178.89.65:34954.service - OpenSSH per-connection server daemon (139.178.89.65:34954). Jul 6 23:51:55.748680 systemd-logind[1445]: Removed session 3. Jul 6 23:51:55.786615 sshd[1598]: Accepted publickey for core from 139.178.89.65 port 34954 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:55.788511 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:55.793605 systemd-logind[1445]: New session 4 of user core. Jul 6 23:51:55.805795 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:51:55.866229 sshd[1598]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:55.880034 systemd[1]: sshd@3-209.38.68.255:22-139.178.89.65:34954.service: Deactivated successfully. Jul 6 23:51:55.882120 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:51:55.884888 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:51:55.889209 systemd[1]: Started sshd@4-209.38.68.255:22-139.178.89.65:34966.service - OpenSSH per-connection server daemon (139.178.89.65:34966). Jul 6 23:51:55.891639 systemd-logind[1445]: Removed session 4. Jul 6 23:51:55.930679 sshd[1605]: Accepted publickey for core from 139.178.89.65 port 34966 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:55.932887 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:55.938658 systemd-logind[1445]: New session 5 of user core. Jul 6 23:51:55.947842 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:51:56.017278 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:51:56.017701 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:56.029625 sudo[1608]: pam_unix(sudo:session): session closed for user root Jul 6 23:51:56.033605 sshd[1605]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:56.043850 systemd[1]: sshd@4-209.38.68.255:22-139.178.89.65:34966.service: Deactivated successfully. Jul 6 23:51:56.046980 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:51:56.050643 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:51:56.057086 systemd[1]: Started sshd@5-209.38.68.255:22-139.178.89.65:34976.service - OpenSSH per-connection server daemon (139.178.89.65:34976). Jul 6 23:51:56.060991 systemd-logind[1445]: Removed session 5. Jul 6 23:51:56.096610 sshd[1613]: Accepted publickey for core from 139.178.89.65 port 34976 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:56.098477 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:56.104418 systemd-logind[1445]: New session 6 of user core. Jul 6 23:51:56.110865 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 6 23:51:56.171489 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:51:56.172458 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:56.177002 sudo[1617]: pam_unix(sudo:session): session closed for user root Jul 6 23:51:56.183584 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:51:56.183927 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:56.200899 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:51:56.211611 auditctl[1620]: No rules Jul 6 23:51:56.212055 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:51:56.212247 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:51:56.220362 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:51:56.249642 augenrules[1638]: No rules Jul 6 23:51:56.251462 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:51:56.252603 sudo[1616]: pam_unix(sudo:session): session closed for user root Jul 6 23:51:56.256102 sshd[1613]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:56.270361 systemd[1]: sshd@5-209.38.68.255:22-139.178.89.65:34976.service: Deactivated successfully. Jul 6 23:51:56.272199 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:51:56.273011 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:51:56.279990 systemd[1]: Started sshd@6-209.38.68.255:22-139.178.89.65:34982.service - OpenSSH per-connection server daemon (139.178.89.65:34982). Jul 6 23:51:56.281996 systemd-logind[1445]: Removed session 6. Jul 6 23:51:56.322619 sshd[1646]: Accepted publickey for core from 139.178.89.65 port 34982 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:56.324354 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:56.329845 systemd-logind[1445]: New session 7 of user core. Jul 6 23:51:56.337787 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:51:56.400088 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:51:56.400442 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:56.826917 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:51:56.839044 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:51:57.266056 dockerd[1666]: time="2025-07-06T23:51:57.265849740Z" level=info msg="Starting up" Jul 6 23:51:57.390114 systemd[1]: var-lib-docker-metacopy\x2dcheck3056297204-merged.mount: Deactivated successfully. Jul 6 23:51:57.402461 dockerd[1666]: time="2025-07-06T23:51:57.402414641Z" level=info msg="Loading containers: start." Jul 6 23:51:57.536913 kernel: Initializing XFRM netlink socket Jul 6 23:51:57.569453 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:57.569755 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:57.585918 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. 
Jul 6 23:51:57.637689 systemd-networkd[1371]: docker0: Link UP Jul 6 23:51:57.638118 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:51:57.656593 dockerd[1666]: time="2025-07-06T23:51:57.656528382Z" level=info msg="Loading containers: done." Jul 6 23:51:57.674720 dockerd[1666]: time="2025-07-06T23:51:57.674346765Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:51:57.674969 dockerd[1666]: time="2025-07-06T23:51:57.674841877Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:51:57.676572 dockerd[1666]: time="2025-07-06T23:51:57.675027617Z" level=info msg="Daemon has completed initialization" Jul 6 23:51:57.711676 dockerd[1666]: time="2025-07-06T23:51:57.711581924Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:51:57.711855 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:51:58.587619 containerd[1471]: time="2025-07-06T23:51:58.587317871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:51:59.151293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839979245.mount: Deactivated successfully. Jul 6 23:52:00.471915 containerd[1471]: time="2025-07-06T23:52:00.471840206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:00.473901 containerd[1471]: time="2025-07-06T23:52:00.473683940Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 6 23:52:00.473901 containerd[1471]: time="2025-07-06T23:52:00.473831972Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:00.478390 containerd[1471]: time="2025-07-06T23:52:00.478294582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:00.482444 containerd[1471]: time="2025-07-06T23:52:00.482171592Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.894805036s" Jul 6 23:52:00.482444 containerd[1471]: time="2025-07-06T23:52:00.482241208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:52:00.483569 containerd[1471]: time="2025-07-06T23:52:00.483426384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:52:01.908574 containerd[1471]: time="2025-07-06T23:52:01.907923459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:01.909358 containerd[1471]: time="2025-07-06T23:52:01.909290126Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 6 23:52:01.910120 containerd[1471]: time="2025-07-06T23:52:01.909790431Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:01.913589 containerd[1471]: time="2025-07-06T23:52:01.912844883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:01.914285 containerd[1471]: time="2025-07-06T23:52:01.914093515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.430555484s" Jul 6 23:52:01.914285 containerd[1471]: time="2025-07-06T23:52:01.914132809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:52:01.914792 containerd[1471]: time="2025-07-06T23:52:01.914766288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:52:02.796820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:52:02.812861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:03.016891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:03.017779 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:52:03.084617 kubelet[1884]: E0706 23:52:03.084405 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:52:03.088949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:52:03.089134 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 6 23:52:03.205335 containerd[1471]: time="2025-07-06T23:52:03.205277126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:03.206622 containerd[1471]: time="2025-07-06T23:52:03.206578015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 6 23:52:03.206756 containerd[1471]: time="2025-07-06T23:52:03.206691951Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:03.210530 containerd[1471]: time="2025-07-06T23:52:03.210478208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:03.211782 containerd[1471]: time="2025-07-06T23:52:03.211742737Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.296942196s" Jul 6 23:52:03.211782 containerd[1471]: time="2025-07-06T23:52:03.211781970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:52:03.212909 containerd[1471]: time="2025-07-06T23:52:03.212259278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:52:04.335093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807442710.mount: Deactivated successfully. 
Jul 6 23:52:04.842981 containerd[1471]: time="2025-07-06T23:52:04.842880446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:04.844144 containerd[1471]: time="2025-07-06T23:52:04.844061044Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 6 23:52:04.846139 containerd[1471]: time="2025-07-06T23:52:04.844713687Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:04.847231 containerd[1471]: time="2025-07-06T23:52:04.847174713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:04.848123 containerd[1471]: time="2025-07-06T23:52:04.848083656Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.635786959s" Jul 6 23:52:04.848250 containerd[1471]: time="2025-07-06T23:52:04.848234216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:52:04.849065 containerd[1471]: time="2025-07-06T23:52:04.849028624Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:52:04.850593 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jul 6 23:52:05.352849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925851392.mount: Deactivated successfully. 
Jul 6 23:52:06.184802 containerd[1471]: time="2025-07-06T23:52:06.184733084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:06.186498 containerd[1471]: time="2025-07-06T23:52:06.186045338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:52:06.187123 containerd[1471]: time="2025-07-06T23:52:06.187078881Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:06.192821 containerd[1471]: time="2025-07-06T23:52:06.192758611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:06.194686 containerd[1471]: time="2025-07-06T23:52:06.194625479Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.34554686s" Jul 6 23:52:06.194911 containerd[1471]: time="2025-07-06T23:52:06.194884994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:52:06.195585 containerd[1471]: time="2025-07-06T23:52:06.195478860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:52:06.685034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232823293.mount: Deactivated successfully. 
Jul 6 23:52:06.690648 containerd[1471]: time="2025-07-06T23:52:06.689827478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:06.691459 containerd[1471]: time="2025-07-06T23:52:06.691265203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:52:06.692113 containerd[1471]: time="2025-07-06T23:52:06.692073745Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:06.695500 containerd[1471]: time="2025-07-06T23:52:06.695438903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:06.697615 containerd[1471]: time="2025-07-06T23:52:06.697569242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.016305ms" Jul 6 23:52:06.697730 containerd[1471]: time="2025-07-06T23:52:06.697622532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:52:06.698321 containerd[1471]: time="2025-07-06T23:52:06.698138084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:52:07.255526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476490140.mount: Deactivated successfully. Jul 6 23:52:07.946752 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jul 6 23:52:08.858242 containerd[1471]: time="2025-07-06T23:52:08.858172080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:08.859616 containerd[1471]: time="2025-07-06T23:52:08.859556924Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 6 23:52:08.860295 containerd[1471]: time="2025-07-06T23:52:08.860061807Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:08.867570 containerd[1471]: time="2025-07-06T23:52:08.865954546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:08.871514 containerd[1471]: time="2025-07-06T23:52:08.871448733Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.173273319s" Jul 6 23:52:08.871788 containerd[1471]: time="2025-07-06T23:52:08.871753892Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:52:11.215880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:11.236020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:11.282071 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit session-7.scope)... Jul 6 23:52:11.282102 systemd[1]: Reloading... Jul 6 23:52:11.404479 zram_generator::config[2075]: No configuration found. Jul 6 23:52:11.544760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:52:11.640438 systemd[1]: Reloading finished in 357 ms. Jul 6 23:52:11.689905 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:52:11.689995 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:52:11.690479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:11.699032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:11.824751 systemd[1]: Started sshd@7-209.38.68.255:22-80.94.95.116:54822.service - OpenSSH per-connection server daemon (80.94.95.116:54822). Jul 6 23:52:11.832814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:11.833496 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:52:11.891574 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:11.892025 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:52:11.892073 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:11.892254 kubelet[2130]: I0706 23:52:11.892213 2130 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:52:12.243814 kubelet[2130]: I0706 23:52:12.243657 2130 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:52:12.243814 kubelet[2130]: I0706 23:52:12.243711 2130 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:52:12.244979 kubelet[2130]: I0706 23:52:12.244895 2130 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:52:12.273125 kubelet[2130]: I0706 23:52:12.273019 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:52:12.277934 kubelet[2130]: E0706 23:52:12.277736 2130 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://209.38.68.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:12.281636 kubelet[2130]: E0706 23:52:12.280945 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:52:12.281636 kubelet[2130]: I0706 23:52:12.280977 2130 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:52:12.284486 kubelet[2130]: I0706 23:52:12.284452 2130 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:52:12.284814 kubelet[2130]: I0706 23:52:12.284774 2130 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:52:12.284995 kubelet[2130]: I0706 23:52:12.284811 2130 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-c-43d64a8ca6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:52:12.284995 kubelet[2130]: I0706 23:52:12.284999 2130 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:52:12.285340 kubelet[2130]: I0706 23:52:12.285009 2130 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:52:12.286331 kubelet[2130]: I0706 23:52:12.286279 2130 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:12.292469 kubelet[2130]: I0706 23:52:12.292012 2130 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:52:12.292469 kubelet[2130]: I0706 23:52:12.292081 2130 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:52:12.292469 kubelet[2130]: I0706 23:52:12.292115 2130 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:52:12.292469 kubelet[2130]: I0706 23:52:12.292128 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:52:12.299250 kubelet[2130]: I0706 23:52:12.298958 2130 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:52:12.302940 kubelet[2130]: I0706 23:52:12.302746 2130 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:52:12.302940 kubelet[2130]: W0706 23:52:12.302836 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:52:12.304566 kubelet[2130]: I0706 23:52:12.303421 2130 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:52:12.304566 kubelet[2130]: I0706 23:52:12.303457 2130 server.go:1287] "Started kubelet" Jul 6 23:52:12.304566 kubelet[2130]: W0706 23:52:12.303631 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.68.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-c-43d64a8ca6&limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:12.304566 kubelet[2130]: E0706 23:52:12.303704 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://209.38.68.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-c-43d64a8ca6&limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:12.318985 kubelet[2130]: W0706 23:52:12.318927 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.68.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:12.319200 kubelet[2130]: I0706 23:52:12.319163 2130 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:52:12.319289 kubelet[2130]: E0706 23:52:12.319266 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.68.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:12.319387 kubelet[2130]: I0706 23:52:12.319143 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:52:12.320268 kubelet[2130]: I0706 23:52:12.320241 2130 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:52:12.321367 kubelet[2130]: I0706 23:52:12.321241 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:52:12.322254 kubelet[2130]: I0706 23:52:12.321528 2130 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:52:12.322254 kubelet[2130]: E0706 23:52:12.321813 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" Jul 6 23:52:12.327811 kubelet[2130]: I0706 23:52:12.326554 2130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:52:12.327811 kubelet[2130]: I0706 23:52:12.326864 2130 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:52:12.331086 kubelet[2130]: E0706 23:52:12.327810 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.68.255:6443/api/v1/namespaces/default/events\": dial tcp 209.38.68.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-c-43d64a8ca6.184fce980028d069 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-c-43d64a8ca6,UID:ci-4081.3.4-c-43d64a8ca6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-c-43d64a8ca6,},FirstTimestamp:2025-07-06 23:52:12.303437929 +0000 UTC m=+0.461182401,LastTimestamp:2025-07-06 23:52:12.303437929 +0000 UTC m=+0.461182401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-c-43d64a8ca6,}" Jul 6 23:52:12.331671 kubelet[2130]: I0706 23:52:12.331503 2130 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:52:12.331918 kubelet[2130]: I0706 23:52:12.331665 2130 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:52:12.333560 kubelet[2130]: I0706 23:52:12.333429 2130 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:52:12.333560 kubelet[2130]: I0706 23:52:12.333493 2130 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:52:12.337793 kubelet[2130]: E0706 23:52:12.337757 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.68.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-c-43d64a8ca6?timeout=10s\": dial tcp 209.38.68.255:6443: connect: connection refused" interval="200ms" Jul 6 23:52:12.339581 kubelet[2130]: W0706 23:52:12.339153 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.68.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:12.339581 kubelet[2130]: E0706 23:52:12.339215 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.68.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:12.340647 kubelet[2130]: I0706 23:52:12.340479 2130 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:52:12.349607 kubelet[2130]: I0706 23:52:12.349494 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:52:12.352861 kubelet[2130]: I0706 23:52:12.352810 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:52:12.352861 kubelet[2130]: I0706 23:52:12.352845 2130 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:52:12.352861 kubelet[2130]: I0706 23:52:12.352870 2130 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
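
Note the "Failed to ensure lease exists, will retry" errors: the retry interval doubles across this section — 200ms here, then 400ms, 800ms, and 1.6s further down. A sketch of that capped exponential backoff, assuming plain doubling up to a maximum (the actual client wait logic may differ in detail):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff keeps calling fn, doubling the wait after each
    // failure up to maxWait, mirroring the 200ms -> 400ms -> 800ms -> 1.6s
    // intervals visible in the lease-controller errors above.
    func retryWithBackoff(fn func() error, initial, maxWait time.Duration) {
        interval := initial
        for {
            err := fn()
            if err == nil {
                return
            }
            fmt.Printf("failed, will retry, interval=%v: %v\n", interval, err)
            time.Sleep(interval)
            interval *= 2
            if interval > maxWait {
                interval = maxWait
            }
        }
    }

    func main() {
        attempts := 0
        retryWithBackoff(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("connect: connection refused")
            }
            return nil
        }, 200*time.Millisecond, 7*time.Second)
    }
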
Jul 6 23:52:12.353054 kubelet[2130]: I0706 23:52:12.352878 2130 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:52:12.353054 kubelet[2130]: E0706 23:52:12.352938 2130 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:52:12.362491 kubelet[2130]: W0706 23:52:12.362168 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.68.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:12.362491 kubelet[2130]: E0706 23:52:12.362229 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.68.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:12.372329 kubelet[2130]: E0706 23:52:12.372216 2130 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:52:12.377622 kubelet[2130]: I0706 23:52:12.377408 2130 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:52:12.377622 kubelet[2130]: I0706 23:52:12.377432 2130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:52:12.377622 kubelet[2130]: I0706 23:52:12.377460 2130 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:12.380150 kubelet[2130]: I0706 23:52:12.379857 2130 policy_none.go:49] "None policy: Start" Jul 6 23:52:12.380150 kubelet[2130]: I0706 23:52:12.379890 2130 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:52:12.380150 kubelet[2130]: I0706 23:52:12.379903 2130 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:52:12.385977 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:52:12.399530 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:52:12.403255 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:52:12.414036 kubelet[2130]: I0706 23:52:12.414002 2130 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:52:12.414912 kubelet[2130]: I0706 23:52:12.414887 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:52:12.415147 kubelet[2130]: I0706 23:52:12.414976 2130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:52:12.416066 kubelet[2130]: I0706 23:52:12.416040 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:52:12.417763 kubelet[2130]: E0706 23:52:12.417725 2130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:52:12.417952 kubelet[2130]: E0706 23:52:12.417779 2130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-c-43d64a8ca6\" not found" Jul 6 23:52:12.463970 systemd[1]: Created slice kubepods-burstable-podee33007c834a2d3cd1bd4d31e180a37f.slice - libcontainer container kubepods-burstable-podee33007c834a2d3cd1bd4d31e180a37f.slice. Jul 6 23:52:12.484571 kubelet[2130]: E0706 23:52:12.484522 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.488318 systemd[1]: Created slice kubepods-burstable-pod667868d6301248ee1d6eb7ceca360162.slice - libcontainer container kubepods-burstable-pod667868d6301248ee1d6eb7ceca360162.slice. Jul 6 23:52:12.508008 kubelet[2130]: E0706 23:52:12.506314 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.510898 systemd[1]: Created slice kubepods-burstable-pod2a50f07796c27c3275e329d7da63ffb6.slice - libcontainer container kubepods-burstable-pod2a50f07796c27c3275e329d7da63ffb6.slice. Jul 6 23:52:12.513010 kubelet[2130]: E0706 23:52:12.512811 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.517049 kubelet[2130]: I0706 23:52:12.517005 2130 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.517564 kubelet[2130]: E0706 23:52:12.517516 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.68.255:6443/api/v1/nodes\": dial tcp 209.38.68.255:6443: connect: connection refused" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535415 kubelet[2130]: I0706 23:52:12.535338 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee33007c834a2d3cd1bd4d31e180a37f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" (UID: \"ee33007c834a2d3cd1bd4d31e180a37f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535415 kubelet[2130]: I0706 23:52:12.535384 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee33007c834a2d3cd1bd4d31e180a37f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" (UID: \"ee33007c834a2d3cd1bd4d31e180a37f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535415 kubelet[2130]: I0706 23:52:12.535405 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535415 kubelet[2130]: I0706 23:52:12.535422 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535415 kubelet[2130]: I0706 23:52:12.535441 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535718 kubelet[2130]: I0706 23:52:12.535460 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee33007c834a2d3cd1bd4d31e180a37f-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" (UID: \"ee33007c834a2d3cd1bd4d31e180a37f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535718 kubelet[2130]: I0706 23:52:12.535488 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535718 kubelet[2130]: I0706 23:52:12.535512 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.535718 kubelet[2130]: I0706 23:52:12.535529 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a50f07796c27c3275e329d7da63ffb6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-c-43d64a8ca6\" (UID: \"2a50f07796c27c3275e329d7da63ffb6\") " pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.538942 kubelet[2130]: E0706 23:52:12.538862 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.68.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-c-43d64a8ca6?timeout=10s\": dial tcp 209.38.68.255:6443: connect: connection refused" interval="400ms" Jul 6 23:52:12.718905 kubelet[2130]: I0706 23:52:12.718857 2130 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.719307 kubelet[2130]: E0706 23:52:12.719259 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.68.255:6443/api/v1/nodes\": dial tcp 209.38.68.255:6443: connect: connection refused" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:12.786088 kubelet[2130]: E0706 23:52:12.785949 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:12.789446 containerd[1471]: time="2025-07-06T23:52:12.788949426Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-c-43d64a8ca6,Uid:ee33007c834a2d3cd1bd4d31e180a37f,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:12.791110 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jul 6 23:52:12.807841 kubelet[2130]: E0706 23:52:12.807783 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:12.816292 kubelet[2130]: E0706 23:52:12.814808 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:12.819355 containerd[1471]: time="2025-07-06T23:52:12.819299363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-c-43d64a8ca6,Uid:2a50f07796c27c3275e329d7da63ffb6,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:12.819925 containerd[1471]: time="2025-07-06T23:52:12.819304459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-c-43d64a8ca6,Uid:667868d6301248ee1d6eb7ceca360162,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:12.940354 kubelet[2130]: E0706 23:52:12.940291 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.68.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-c-43d64a8ca6?timeout=10s\": dial tcp 209.38.68.255:6443: connect: connection refused" interval="800ms" Jul 6 23:52:13.122053 kubelet[2130]: I0706 23:52:13.121659 2130 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:13.122053 kubelet[2130]: E0706 23:52:13.121995 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.68.255:6443/api/v1/nodes\": dial tcp 209.38.68.255:6443: connect: connection refused" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:13.319368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132427508.mount: Deactivated successfully. 
Jul 6 23:52:13.327574 containerd[1471]: time="2025-07-06T23:52:13.325459321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:13.327574 containerd[1471]: time="2025-07-06T23:52:13.326778652Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:13.327842 containerd[1471]: time="2025-07-06T23:52:13.327591377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:52:13.327842 containerd[1471]: time="2025-07-06T23:52:13.327632405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:52:13.328351 containerd[1471]: time="2025-07-06T23:52:13.328310113Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:13.328600 containerd[1471]: time="2025-07-06T23:52:13.328553749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:52:13.329733 containerd[1471]: time="2025-07-06T23:52:13.329698078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:13.334269 containerd[1471]: time="2025-07-06T23:52:13.334222260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:13.335185 containerd[1471]: time="2025-07-06T23:52:13.335148658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.027906ms" Jul 6 23:52:13.338940 containerd[1471]: time="2025-07-06T23:52:13.337670654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.03584ms" Jul 6 23:52:13.340700 containerd[1471]: time="2025-07-06T23:52:13.340640693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.566537ms" Jul 6 23:52:13.482835 kubelet[2130]: W0706 23:52:13.482683 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.68.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-c-43d64a8ca6&limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 
23:52:13.482835 kubelet[2130]: E0706 23:52:13.482759 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://209.38.68.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-c-43d64a8ca6&limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:13.497480 containerd[1471]: time="2025-07-06T23:52:13.497286119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:13.497480 containerd[1471]: time="2025-07-06T23:52:13.497388762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:13.497480 containerd[1471]: time="2025-07-06T23:52:13.497404736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:13.497837 containerd[1471]: time="2025-07-06T23:52:13.497663194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:13.514765 containerd[1471]: time="2025-07-06T23:52:13.514468210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:13.514765 containerd[1471]: time="2025-07-06T23:52:13.514529094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:13.514765 containerd[1471]: time="2025-07-06T23:52:13.514566746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:13.514765 containerd[1471]: time="2025-07-06T23:52:13.514652583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:13.516643 kubelet[2130]: W0706 23:52:13.516567 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.68.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:13.516780 kubelet[2130]: E0706 23:52:13.516649 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.68.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:13.521681 containerd[1471]: time="2025-07-06T23:52:13.519447057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:13.521681 containerd[1471]: time="2025-07-06T23:52:13.519519137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:13.521681 containerd[1471]: time="2025-07-06T23:52:13.519544580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:13.521681 containerd[1471]: time="2025-07-06T23:52:13.519741606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:13.542493 systemd[1]: Started cri-containerd-ad7e553691897fb7f47c25db20ce48f59d8eda910d97952a92581386a47ddda6.scope - libcontainer container ad7e553691897fb7f47c25db20ce48f59d8eda910d97952a92581386a47ddda6. Jul 6 23:52:13.547747 systemd[1]: Started cri-containerd-6bfcf3d97e7af3a40847f38616a965fac5ed5c0313f9718f9f34e39e3b3ff4b8.scope - libcontainer container 6bfcf3d97e7af3a40847f38616a965fac5ed5c0313f9718f9f34e39e3b3ff4b8. Jul 6 23:52:13.558074 systemd[1]: Started cri-containerd-62c4326bbe8185f80fc261cff18d724fbffb8675ae03b5f26c6074da04ea236d.scope - libcontainer container 62c4326bbe8185f80fc261cff18d724fbffb8675ae03b5f26c6074da04ea236d. Jul 6 23:52:13.582339 kubelet[2130]: W0706 23:52:13.582191 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.68.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:13.582339 kubelet[2130]: E0706 23:52:13.582260 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.68.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:13.630449 containerd[1471]: time="2025-07-06T23:52:13.630293659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-c-43d64a8ca6,Uid:667868d6301248ee1d6eb7ceca360162,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bfcf3d97e7af3a40847f38616a965fac5ed5c0313f9718f9f34e39e3b3ff4b8\"" Jul 6 23:52:13.635518 kubelet[2130]: E0706 23:52:13.635397 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:13.644206 containerd[1471]: time="2025-07-06T23:52:13.644056696Z" level=info msg="CreateContainer within sandbox \"6bfcf3d97e7af3a40847f38616a965fac5ed5c0313f9718f9f34e39e3b3ff4b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:52:13.648027 containerd[1471]: time="2025-07-06T23:52:13.647993684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-c-43d64a8ca6,Uid:ee33007c834a2d3cd1bd4d31e180a37f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad7e553691897fb7f47c25db20ce48f59d8eda910d97952a92581386a47ddda6\"" Jul 6 23:52:13.651578 kubelet[2130]: E0706 23:52:13.651233 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:13.654858 containerd[1471]: time="2025-07-06T23:52:13.654694730Z" level=info msg="CreateContainer within sandbox \"ad7e553691897fb7f47c25db20ce48f59d8eda910d97952a92581386a47ddda6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:52:13.658799 containerd[1471]: time="2025-07-06T23:52:13.658754325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-c-43d64a8ca6,Uid:2a50f07796c27c3275e329d7da63ffb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"62c4326bbe8185f80fc261cff18d724fbffb8675ae03b5f26c6074da04ea236d\"" Jul 6 23:52:13.660071 kubelet[2130]: E0706 23:52:13.660037 2130 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:13.664671 sshd[2128]: Invalid user admin from 80.94.95.116 port 54822 Jul 6 23:52:13.665942 containerd[1471]: time="2025-07-06T23:52:13.665491871Z" level=info msg="CreateContainer within sandbox \"6bfcf3d97e7af3a40847f38616a965fac5ed5c0313f9718f9f34e39e3b3ff4b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8072548270afabf183a60cda10cdf907556391eb3514f4a0d04ea915e0904a0c\"" Jul 6 23:52:13.667228 containerd[1471]: time="2025-07-06T23:52:13.666291909Z" level=info msg="CreateContainer within sandbox \"62c4326bbe8185f80fc261cff18d724fbffb8675ae03b5f26c6074da04ea236d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:52:13.668156 containerd[1471]: time="2025-07-06T23:52:13.667837446Z" level=info msg="StartContainer for \"8072548270afabf183a60cda10cdf907556391eb3514f4a0d04ea915e0904a0c\"" Jul 6 23:52:13.676923 containerd[1471]: time="2025-07-06T23:52:13.676874849Z" level=info msg="CreateContainer within sandbox \"ad7e553691897fb7f47c25db20ce48f59d8eda910d97952a92581386a47ddda6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0321559b94edda667419aeae24435280b5faa6f405d7cbe9a0d7010858786c3b\"" Jul 6 23:52:13.679169 containerd[1471]: time="2025-07-06T23:52:13.678027737Z" level=info msg="StartContainer for \"0321559b94edda667419aeae24435280b5faa6f405d7cbe9a0d7010858786c3b\"" Jul 6 23:52:13.691072 containerd[1471]: time="2025-07-06T23:52:13.691004388Z" level=info msg="CreateContainer within sandbox \"62c4326bbe8185f80fc261cff18d724fbffb8675ae03b5f26c6074da04ea236d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3721adfd797fdc7b34a9610de64b30b397055394c3fd6437c9e2c50a21a3653f\"" Jul 6 23:52:13.693011 containerd[1471]: time="2025-07-06T23:52:13.692967589Z" level=info msg="StartContainer for \"3721adfd797fdc7b34a9610de64b30b397055394c3fd6437c9e2c50a21a3653f\"" Jul 6 23:52:13.710155 systemd[1]: Started cri-containerd-8072548270afabf183a60cda10cdf907556391eb3514f4a0d04ea915e0904a0c.scope - libcontainer container 8072548270afabf183a60cda10cdf907556391eb3514f4a0d04ea915e0904a0c. Jul 6 23:52:13.729761 systemd[1]: Started cri-containerd-0321559b94edda667419aeae24435280b5faa6f405d7cbe9a0d7010858786c3b.scope - libcontainer container 0321559b94edda667419aeae24435280b5faa6f405d7cbe9a0d7010858786c3b. Jul 6 23:52:13.741443 kubelet[2130]: E0706 23:52:13.741318 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.68.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-c-43d64a8ca6?timeout=10s\": dial tcp 209.38.68.255:6443: connect: connection refused" interval="1.6s" Jul 6 23:52:13.764219 systemd[1]: Started cri-containerd-3721adfd797fdc7b34a9610de64b30b397055394c3fd6437c9e2c50a21a3653f.scope - libcontainer container 3721adfd797fdc7b34a9610de64b30b397055394c3fd6437c9e2c50a21a3653f. 
Jul 6 23:52:13.798784 containerd[1471]: time="2025-07-06T23:52:13.798639532Z" level=info msg="StartContainer for \"8072548270afabf183a60cda10cdf907556391eb3514f4a0d04ea915e0904a0c\" returns successfully" Jul 6 23:52:13.812317 kubelet[2130]: E0706 23:52:13.812160 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.68.255:6443/api/v1/namespaces/default/events\": dial tcp 209.38.68.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-c-43d64a8ca6.184fce980028d069 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-c-43d64a8ca6,UID:ci-4081.3.4-c-43d64a8ca6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-c-43d64a8ca6,},FirstTimestamp:2025-07-06 23:52:12.303437929 +0000 UTC m=+0.461182401,LastTimestamp:2025-07-06 23:52:12.303437929 +0000 UTC m=+0.461182401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-c-43d64a8ca6,}" Jul 6 23:52:13.823295 containerd[1471]: time="2025-07-06T23:52:13.822864113Z" level=info msg="StartContainer for \"0321559b94edda667419aeae24435280b5faa6f405d7cbe9a0d7010858786c3b\" returns successfully" Jul 6 23:52:13.848525 containerd[1471]: time="2025-07-06T23:52:13.848393040Z" level=info msg="StartContainer for \"3721adfd797fdc7b34a9610de64b30b397055394c3fd6437c9e2c50a21a3653f\" returns successfully" Jul 6 23:52:13.874081 sshd[2128]: Connection closed by invalid user admin 80.94.95.116 port 54822 [preauth] Jul 6 23:52:13.876022 systemd[1]: sshd@7-209.38.68.255:22-80.94.95.116:54822.service: Deactivated successfully. 
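
The Event object dumped twice in this section (Reason:Starting, FirstTimestamp, LastTimestamp, Count:1) exposes the fields used to aggregate repeated events: a re-occurrence bumps Count and LastTimestamp rather than minting a new API object. A simplified sketch of that keying — an assumption-laden stand-in, not the client-go recorder:

    package main

    import (
        "fmt"
        "time"
    )

    // event keeps the aggregation fields visible in the dumped Event above.
    type event struct {
        Reason         string
        Message        string
        FirstTimestamp time.Time
        LastTimestamp  time.Time
        Count          int
    }

    // record creates a new entry, or bumps Count/LastTimestamp on a repeat
    // with the same Reason+Message key.
    func record(cache map[string]*event, reason, msg string, now time.Time) *event {
        key := reason + "/" + msg
        if e, ok := cache[key]; ok {
            e.Count++
            e.LastTimestamp = now
            return e
        }
        e := &event{Reason: reason, Message: msg, FirstTimestamp: now, LastTimestamp: now, Count: 1}
        cache[key] = e
        return e
    }

    func main() {
        cache := map[string]*event{}
        t0 := time.Now()
        record(cache, "Starting", "Starting kubelet.", t0)
        e := record(cache, "Starting", "Starting kubelet.", t0.Add(time.Second))
        fmt.Printf("count=%d first==t0: %v last>t0: %v\n", e.Count, e.FirstTimestamp.Equal(t0), e.LastTimestamp.After(t0))
    }
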
Jul 6 23:52:13.885395 kubelet[2130]: W0706 23:52:13.885309 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.68.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.68.255:6443: connect: connection refused Jul 6 23:52:13.885610 kubelet[2130]: E0706 23:52:13.885410 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.68.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.68.255:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:52:13.923992 kubelet[2130]: I0706 23:52:13.923906 2130 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:13.924263 kubelet[2130]: E0706 23:52:13.924234 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.68.255:6443/api/v1/nodes\": dial tcp 209.38.68.255:6443: connect: connection refused" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:14.386956 kubelet[2130]: E0706 23:52:14.386852 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:14.387443 kubelet[2130]: E0706 23:52:14.387023 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:14.391760 kubelet[2130]: E0706 23:52:14.389877 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:14.391760 kubelet[2130]: E0706 23:52:14.389999 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:14.391958 kubelet[2130]: E0706 23:52:14.391938 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:14.392058 kubelet[2130]: E0706 23:52:14.392045 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:15.397577 kubelet[2130]: E0706 23:52:15.396207 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.397577 kubelet[2130]: E0706 23:52:15.396332 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:15.400092 kubelet[2130]: E0706 23:52:15.399890 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.400092 kubelet[2130]: E0706 23:52:15.400026 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:15.527177 kubelet[2130]: I0706 23:52:15.526258 2130 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.702852 kubelet[2130]: E0706 23:52:15.702220 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-c-43d64a8ca6\" not found" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.799702 kubelet[2130]: I0706 23:52:15.799645 2130 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.822851 kubelet[2130]: I0706 23:52:15.822778 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.833896 kubelet[2130]: E0706 23:52:15.833812 2130 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.833896 kubelet[2130]: I0706 23:52:15.833893 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.838412 kubelet[2130]: E0706 23:52:15.838216 2130 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.838412 kubelet[2130]: I0706 23:52:15.838276 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:15.841523 kubelet[2130]: E0706 23:52:15.841467 2130 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-c-43d64a8ca6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:16.317656 kubelet[2130]: I0706 23:52:16.317521 2130 apiserver.go:52] "Watching apiserver" Jul 6 23:52:16.333908 kubelet[2130]: I0706 23:52:16.333857 2130 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:52:16.863790 kubelet[2130]: I0706 23:52:16.863727 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:16.878261 kubelet[2130]: W0706 23:52:16.877944 2130 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:52:16.878261 kubelet[2130]: E0706 23:52:16.878223 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:17.399381 kubelet[2130]: E0706 23:52:17.399086 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:17.803072 systemd[1]: Reloading requested from client PID 2407 ('systemctl') (unit session-7.scope)... Jul 6 23:52:17.803577 systemd[1]: Reloading... Jul 6 23:52:17.921629 zram_generator::config[2455]: No configuration found. 
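
The warnings.go lines flag that the node name ci-4081.3.4-c-43d64a8ca6 is used as a pod hostname but is not a DNS label ("must not contain dots"). An RFC 1123 label is at most 63 characters of lowercase alphanumerics and hyphens, starting and ending alphanumeric; a quick validator sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    // dns1123Label matches what the warning above asks for: a name usable
    // as a pod hostname. Dots, as in ci-4081.3.4-c-43d64a8ca6, fail it.
    var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

    func isDNSLabel(s string) bool {
        return len(s) <= 63 && dns1123Label.MatchString(s)
    }

    func main() {
        fmt.Println(isDNSLabel("ci-4081.3.4-c-43d64a8ca6")) // false: contains dots
        fmt.Println(isDNSLabel("ci-4081-3-4-c-43d64a8ca6")) // true
    }
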
Jul 6 23:52:18.040617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:52:18.149268 systemd[1]: Reloading finished in 344 ms. Jul 6 23:52:18.200234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:18.221347 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:52:18.221687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:18.228924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:18.363118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:18.376219 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:52:18.444671 kubelet[2497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:18.444671 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:52:18.444671 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:18.444671 kubelet[2497]: I0706 23:52:18.444578 2497 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:52:18.452103 kubelet[2497]: I0706 23:52:18.452055 2497 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:52:18.452982 kubelet[2497]: I0706 23:52:18.452302 2497 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:52:18.452982 kubelet[2497]: I0706 23:52:18.452643 2497 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:52:18.454227 kubelet[2497]: I0706 23:52:18.454200 2497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:52:18.464101 kubelet[2497]: I0706 23:52:18.463054 2497 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:52:18.469008 kubelet[2497]: E0706 23:52:18.468962 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:52:18.469414 kubelet[2497]: I0706 23:52:18.469399 2497 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:52:18.476350 kubelet[2497]: I0706 23:52:18.476307 2497 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:52:18.476617 kubelet[2497]: I0706 23:52:18.476573 2497 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:52:18.476823 kubelet[2497]: I0706 23:52:18.476617 2497 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-c-43d64a8ca6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:52:18.476917 kubelet[2497]: I0706 23:52:18.476839 2497 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:52:18.476917 kubelet[2497]: I0706 23:52:18.476850 2497 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:52:18.476991 kubelet[2497]: I0706 23:52:18.476919 2497 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:18.477220 kubelet[2497]: I0706 23:52:18.477199 2497 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:52:18.477306 kubelet[2497]: I0706 23:52:18.477238 2497 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:52:18.477306 kubelet[2497]: I0706 23:52:18.477260 2497 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:52:18.477306 kubelet[2497]: I0706 23:52:18.477272 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:52:18.480785 kubelet[2497]: I0706 23:52:18.480741 2497 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:52:18.483521 kubelet[2497]: I0706 23:52:18.481677 2497 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:52:18.483521 kubelet[2497]: I0706 23:52:18.482245 2497 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:52:18.483521 kubelet[2497]: I0706 23:52:18.482285 2497 server.go:1287] "Started kubelet" Jul 6 23:52:18.488209 kubelet[2497]: I0706 23:52:18.488184 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:52:18.494775 kubelet[2497]: I0706 23:52:18.494733 2497 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jul 6 23:52:18.499818 kubelet[2497]: I0706 23:52:18.499733 2497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:52:18.500640 kubelet[2497]: I0706 23:52:18.500620 2497 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:52:18.507988 kubelet[2497]: I0706 23:52:18.502212 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:52:18.508316 kubelet[2497]: I0706 23:52:18.503987 2497 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:52:18.508908 kubelet[2497]: I0706 23:52:18.503998 2497 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:52:18.509020 kubelet[2497]: E0706 23:52:18.504172 2497 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-c-43d64a8ca6\" not found" Jul 6 23:52:18.509100 kubelet[2497]: I0706 23:52:18.504232 2497 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:52:18.509362 kubelet[2497]: I0706 23:52:18.509349 2497 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:52:18.515569 kubelet[2497]: I0706 23:52:18.515519 2497 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:52:18.515569 kubelet[2497]: I0706 23:52:18.515553 2497 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:52:18.515751 kubelet[2497]: I0706 23:52:18.515669 2497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:52:18.515931 kubelet[2497]: E0706 23:52:18.515911 2497 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:52:18.528947 kubelet[2497]: I0706 23:52:18.528822 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:52:18.533526 kubelet[2497]: I0706 23:52:18.533103 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:52:18.533526 kubelet[2497]: I0706 23:52:18.533143 2497 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:52:18.533526 kubelet[2497]: I0706 23:52:18.533164 2497 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:52:18.533526 kubelet[2497]: I0706 23:52:18.533173 2497 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:52:18.533526 kubelet[2497]: E0706 23:52:18.533235 2497 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578631 2497 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578650 2497 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578677 2497 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578858 2497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578871 2497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578896 2497 policy_none.go:49] "None policy: Start" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578910 2497 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.578922 2497 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:52:18.579628 kubelet[2497]: I0706 23:52:18.579035 2497 state_mem.go:75] "Updated machine memory state" Jul 6 23:52:18.584940 kubelet[2497]: I0706 23:52:18.584899 2497 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:52:18.585624 kubelet[2497]: I0706 23:52:18.585607 2497 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:52:18.585824 kubelet[2497]: I0706 23:52:18.585789 2497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:52:18.586601 kubelet[2497]: I0706 23:52:18.586583 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:52:18.589106 kubelet[2497]: E0706 23:52:18.589077 2497 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:52:18.634702 kubelet[2497]: I0706 23:52:18.634650 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.636402 kubelet[2497]: I0706 23:52:18.636352 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.637201 kubelet[2497]: I0706 23:52:18.637179 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.641183 kubelet[2497]: W0706 23:52:18.641057 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:52:18.643194 kubelet[2497]: W0706 23:52:18.642620 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:52:18.643194 kubelet[2497]: E0706 23:52:18.642703 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-c-43d64a8ca6\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.643439 kubelet[2497]: W0706 23:52:18.643426 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:52:18.688902 kubelet[2497]: I0706 23:52:18.688092 2497 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.702929 kubelet[2497]: I0706 23:52:18.701142 2497 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.702929 kubelet[2497]: I0706 23:52:18.702703 2497 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711161 kubelet[2497]: I0706 23:52:18.711113 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee33007c834a2d3cd1bd4d31e180a37f-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" (UID: \"ee33007c834a2d3cd1bd4d31e180a37f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711161 kubelet[2497]: I0706 23:52:18.711160 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee33007c834a2d3cd1bd4d31e180a37f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" (UID: \"ee33007c834a2d3cd1bd4d31e180a37f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711372 kubelet[2497]: I0706 23:52:18.711186 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711372 kubelet[2497]: I0706 23:52:18.711212 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: 
\"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711372 kubelet[2497]: I0706 23:52:18.711242 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711372 kubelet[2497]: I0706 23:52:18.711259 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee33007c834a2d3cd1bd4d31e180a37f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" (UID: \"ee33007c834a2d3cd1bd4d31e180a37f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711372 kubelet[2497]: I0706 23:52:18.711297 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711509 kubelet[2497]: I0706 23:52:18.711328 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/667868d6301248ee1d6eb7ceca360162-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-c-43d64a8ca6\" (UID: \"667868d6301248ee1d6eb7ceca360162\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.711509 kubelet[2497]: I0706 23:52:18.711351 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a50f07796c27c3275e329d7da63ffb6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-c-43d64a8ca6\" (UID: \"2a50f07796c27c3275e329d7da63ffb6\") " pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:18.943179 kubelet[2497]: E0706 23:52:18.942716 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:18.943179 kubelet[2497]: E0706 23:52:18.942865 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:18.945024 kubelet[2497]: E0706 23:52:18.944841 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.478331 kubelet[2497]: I0706 23:52:19.478290 2497 apiserver.go:52] "Watching apiserver" Jul 6 23:52:19.509886 kubelet[2497]: I0706 23:52:19.509825 2497 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:52:19.559234 kubelet[2497]: I0706 23:52:19.558886 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:19.559729 kubelet[2497]: I0706 23:52:19.559531 2497 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:19.563562 kubelet[2497]: E0706 23:52:19.563478 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.577757 kubelet[2497]: W0706 23:52:19.577713 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:52:19.577950 kubelet[2497]: E0706 23:52:19.577801 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-c-43d64a8ca6\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:19.578039 kubelet[2497]: E0706 23:52:19.578016 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.579616 kubelet[2497]: W0706 23:52:19.579313 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:52:19.579778 kubelet[2497]: E0706 23:52:19.579686 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-c-43d64a8ca6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" Jul 6 23:52:19.582563 kubelet[2497]: E0706 23:52:19.581617 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.629928 kubelet[2497]: I0706 23:52:19.629839 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-c-43d64a8ca6" podStartSLOduration=1.629791387 podStartE2EDuration="1.629791387s" podCreationTimestamp="2025-07-06 23:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:19.618718878 +0000 UTC m=+1.236530179" watchObservedRunningTime="2025-07-06 23:52:19.629791387 +0000 UTC m=+1.247602689" Jul 6 23:52:19.643510 kubelet[2497]: I0706 23:52:19.643434 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-c-43d64a8ca6" podStartSLOduration=1.643411231 podStartE2EDuration="1.643411231s" podCreationTimestamp="2025-07-06 23:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:19.643013714 +0000 UTC m=+1.260825017" watchObservedRunningTime="2025-07-06 23:52:19.643411231 +0000 UTC m=+1.261222533" Jul 6 23:52:19.643795 kubelet[2497]: I0706 23:52:19.643569 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-c-43d64a8ca6" podStartSLOduration=3.643560089 podStartE2EDuration="3.643560089s" podCreationTimestamp="2025-07-06 23:52:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:19.631595216 +0000 UTC m=+1.249406512" watchObservedRunningTime="2025-07-06 23:52:19.643560089 +0000 UTC m=+1.261371387" Jul 6 23:52:20.561513 kubelet[2497]: E0706 
23:52:20.561399 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:20.561513 kubelet[2497]: E0706 23:52:20.561434 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:21.563035 kubelet[2497]: E0706 23:52:21.562965 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:24.371044 kubelet[2497]: I0706 23:52:24.371004 2497 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:52:24.373008 containerd[1471]: time="2025-07-06T23:52:24.372307142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:52:24.374155 kubelet[2497]: I0706 23:52:24.372555 2497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:52:25.200193 kubelet[2497]: E0706 23:52:25.200086 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:25.360499 systemd[1]: Created slice kubepods-besteffort-pod6690b985_286d_47cf_903d_6dd7d908d2b6.slice - libcontainer container kubepods-besteffort-pod6690b985_286d_47cf_903d_6dd7d908d2b6.slice. Jul 6 23:52:25.454580 kubelet[2497]: I0706 23:52:25.454351 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6690b985-286d-47cf-903d-6dd7d908d2b6-kube-proxy\") pod \"kube-proxy-lzvj9\" (UID: \"6690b985-286d-47cf-903d-6dd7d908d2b6\") " pod="kube-system/kube-proxy-lzvj9" Jul 6 23:52:25.454580 kubelet[2497]: I0706 23:52:25.454390 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6690b985-286d-47cf-903d-6dd7d908d2b6-xtables-lock\") pod \"kube-proxy-lzvj9\" (UID: \"6690b985-286d-47cf-903d-6dd7d908d2b6\") " pod="kube-system/kube-proxy-lzvj9" Jul 6 23:52:25.454580 kubelet[2497]: I0706 23:52:25.454408 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6690b985-286d-47cf-903d-6dd7d908d2b6-lib-modules\") pod \"kube-proxy-lzvj9\" (UID: \"6690b985-286d-47cf-903d-6dd7d908d2b6\") " pod="kube-system/kube-proxy-lzvj9" Jul 6 23:52:25.454580 kubelet[2497]: I0706 23:52:25.454425 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58vd\" (UniqueName: \"kubernetes.io/projected/6690b985-286d-47cf-903d-6dd7d908d2b6-kube-api-access-v58vd\") pod \"kube-proxy-lzvj9\" (UID: \"6690b985-286d-47cf-903d-6dd7d908d2b6\") " pod="kube-system/kube-proxy-lzvj9" Jul 6 23:52:25.483406 systemd[1]: Created slice kubepods-besteffort-pod043e4350_41b9_43fa_a910_141ad5715868.slice - libcontainer container kubepods-besteffort-pod043e4350_41b9_43fa_a910_141ad5715868.slice. 
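[Editor's note] The recurring dns.go:153 records reflect the kubelet clamping a pod's resolv.conf to three nameservers (the historical glibc MAXNS limit); the droplet's resolver config evidently lists more than three entries, and because the kubelet truncates rather than deduplicates, the applied line even repeats 67.207.67.3. A minimal sketch of that clamping behavior, assuming a resolv.conf-style input (illustrative code, not kubelet source):

```python
# Illustrative sketch of the kubelet's nameserver clamping (not kubelet source).
# The cap of 3 mirrors the historical glibc MAXNS limit the kubelet enforces.
MAX_NAMESERVERS = 3

def clamp_nameservers(resolv_conf: str) -> list[str]:
    """Keep the first MAX_NAMESERVERS 'nameserver' entries; anything beyond
    is dropped, which is what 'some nameservers have been omitted' reports."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS]

# Hypothetical input that reproduces the applied line from the log;
# note there is no deduplication, so 67.207.67.3 appears twice.
conf = """\
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 8.8.8.8
"""
print(clamp_nameservers(conf))  # ['67.207.67.3', '67.207.67.2', '67.207.67.3']
```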
Jul 6 23:52:25.554966 kubelet[2497]: I0706 23:52:25.554909 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5fc\" (UniqueName: \"kubernetes.io/projected/043e4350-41b9-43fa-a910-141ad5715868-kube-api-access-wk5fc\") pod \"tigera-operator-747864d56d-6nwxf\" (UID: \"043e4350-41b9-43fa-a910-141ad5715868\") " pod="tigera-operator/tigera-operator-747864d56d-6nwxf" Jul 6 23:52:25.554966 kubelet[2497]: I0706 23:52:25.554972 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/043e4350-41b9-43fa-a910-141ad5715868-var-lib-calico\") pod \"tigera-operator-747864d56d-6nwxf\" (UID: \"043e4350-41b9-43fa-a910-141ad5715868\") " pod="tigera-operator/tigera-operator-747864d56d-6nwxf" Jul 6 23:52:25.572937 kubelet[2497]: E0706 23:52:25.572893 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:25.674969 kubelet[2497]: E0706 23:52:25.674916 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:25.676089 containerd[1471]: time="2025-07-06T23:52:25.676014038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzvj9,Uid:6690b985-286d-47cf-903d-6dd7d908d2b6,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:25.703862 containerd[1471]: time="2025-07-06T23:52:25.702790990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:25.703862 containerd[1471]: time="2025-07-06T23:52:25.702861807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:25.703862 containerd[1471]: time="2025-07-06T23:52:25.702879995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:25.704061 containerd[1471]: time="2025-07-06T23:52:25.703808159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:25.737198 systemd[1]: Started cri-containerd-e48639a48bfd77571a4629a7add219345e747aa0a82273141bb257a6a5a457c9.scope - libcontainer container e48639a48bfd77571a4629a7add219345e747aa0a82273141bb257a6a5a457c9. 
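[Editor's note] The systemd "Created slice" records pair directly with the pod UIDs in the reconciler lines: with the systemd cgroup driver, the BestEffort pod with UID 6690b985-286d-47cf-903d-6dd7d908d2b6 lands in kubepods-besteffort-pod6690b985_286d_47cf_903d_6dd7d908d2b6.slice, i.e. the UID's dashes are escaped to underscores under a per-QoS-class parent slice. A small sketch of that mapping, derived from the names in this log rather than from kubelet source:

```python
# Sketch: map a pod UID and QoS class to the systemd slice name seen above.
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # The kubelet's systemd cgroup driver escapes the UID's dashes to
    # underscores so they are not read as systemd unit-path separators.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Both slices from the log round-trip correctly:
assert pod_slice_name("6690b985-286d-47cf-903d-6dd7d908d2b6") == \
    "kubepods-besteffort-pod6690b985_286d_47cf_903d_6dd7d908d2b6.slice"
assert pod_slice_name("043e4350-41b9-43fa-a910-141ad5715868") == \
    "kubepods-besteffort-pod043e4350_41b9_43fa_a910_141ad5715868.slice"
```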
Jul 6 23:52:25.768230 containerd[1471]: time="2025-07-06T23:52:25.768116922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzvj9,Uid:6690b985-286d-47cf-903d-6dd7d908d2b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e48639a48bfd77571a4629a7add219345e747aa0a82273141bb257a6a5a457c9\"" Jul 6 23:52:25.769478 kubelet[2497]: E0706 23:52:25.769432 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:25.773160 containerd[1471]: time="2025-07-06T23:52:25.772761410Z" level=info msg="CreateContainer within sandbox \"e48639a48bfd77571a4629a7add219345e747aa0a82273141bb257a6a5a457c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:52:25.788947 containerd[1471]: time="2025-07-06T23:52:25.788670743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6nwxf,Uid:043e4350-41b9-43fa-a910-141ad5715868,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:52:25.792209 containerd[1471]: time="2025-07-06T23:52:25.792157786Z" level=info msg="CreateContainer within sandbox \"e48639a48bfd77571a4629a7add219345e747aa0a82273141bb257a6a5a457c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24a07a59cae6097f78158537b922d63b080a19a8fe437a6814b64fd60039667e\"" Jul 6 23:52:25.793708 containerd[1471]: time="2025-07-06T23:52:25.793436618Z" level=info msg="StartContainer for \"24a07a59cae6097f78158537b922d63b080a19a8fe437a6814b64fd60039667e\"" Jul 6 23:52:25.839321 systemd[1]: Started cri-containerd-24a07a59cae6097f78158537b922d63b080a19a8fe437a6814b64fd60039667e.scope - libcontainer container 24a07a59cae6097f78158537b922d63b080a19a8fe437a6814b64fd60039667e. Jul 6 23:52:25.843888 containerd[1471]: time="2025-07-06T23:52:25.843675518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:25.844439 containerd[1471]: time="2025-07-06T23:52:25.843996565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:25.844439 containerd[1471]: time="2025-07-06T23:52:25.844066057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:25.846565 containerd[1471]: time="2025-07-06T23:52:25.846339879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:25.874011 systemd[1]: Started cri-containerd-ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0.scope - libcontainer container ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0. 
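[Editor's note] The containerd records above trace the usual CRI sequence for bringing up a pod: RunPodSandbox returns a sandbox ID (e48639a4…57c9 for kube-proxy-lzvj9), CreateContainer is issued within that sandbox and returns a container ID (24a07a59…667e), and StartContainer then runs it. A schematic of the call order, with a purely hypothetical `cri` stub standing in for the gRPC RuntimeService the kubelet actually speaks:

```python
# Schematic of the CRI call order visible in the containerd records.
# `cri` is a hypothetical stub; the real interface is the CRI gRPC
# RuntimeService (RunPodSandbox / CreateContainer / StartContainer).
def launch_pod(cri, sandbox_config, container_config):
    sandbox_id = cri.RunPodSandbox(sandbox_config)        # "...returns sandbox id"
    container_id = cri.CreateContainer(sandbox_id,        # "CreateContainer within sandbox"
                                       container_config,
                                       sandbox_config)
    cri.StartContainer(container_id)                      # "StartContainer...returns successfully"
    return sandbox_id, container_id
```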
Jul 6 23:52:25.893508 containerd[1471]: time="2025-07-06T23:52:25.892773497Z" level=info msg="StartContainer for \"24a07a59cae6097f78158537b922d63b080a19a8fe437a6814b64fd60039667e\" returns successfully" Jul 6 23:52:25.951354 containerd[1471]: time="2025-07-06T23:52:25.951306007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6nwxf,Uid:043e4350-41b9-43fa-a910-141ad5715868,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0\"" Jul 6 23:52:25.955894 containerd[1471]: time="2025-07-06T23:52:25.955844385Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:52:26.468249 kubelet[2497]: E0706 23:52:26.467824 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:26.584604 kubelet[2497]: E0706 23:52:26.582508 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:26.585707 kubelet[2497]: E0706 23:52:26.585664 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:26.586180 kubelet[2497]: E0706 23:52:26.586153 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:26.612450 kubelet[2497]: I0706 23:52:26.612397 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzvj9" podStartSLOduration=1.61237803 podStartE2EDuration="1.61237803s" podCreationTimestamp="2025-07-06 23:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:26.596846862 +0000 UTC m=+8.214658167" watchObservedRunningTime="2025-07-06 23:52:26.61237803 +0000 UTC m=+8.230189330" Jul 6 23:52:27.450308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590899645.mount: Deactivated successfully. Jul 6 23:52:28.964221 systemd-timesyncd[1340]: Contacted time server 23.168.24.210:123 (2.flatcar.pool.ntp.org). Jul 6 23:52:28.964239 systemd-resolved[1325]: Clock change detected. Flushing caches. Jul 6 23:52:28.964305 systemd-timesyncd[1340]: Initial clock synchronization to Sun 2025-07-06 23:52:28.963716 UTC. 
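[Editor's note] The pod_startup_latency_tracker record for kube-proxy-lzvj9 is plain arithmetic: both pull timestamps are the zero value (0001-01-01, i.e. the image was already on disk), so podStartSLOduration reduces to the watch-observed running time minus the pod creation timestamp. Checking the record's numbers:

```python
from datetime import datetime, timezone

# Timestamps copied from the kube-proxy-lzvj9 startup-latency record
# (truncated to microsecond precision; the record carries two more digits).
created  = datetime(2025, 7, 6, 23, 52, 25, 0,      tzinfo=timezone.utc)
observed = datetime(2025, 7, 6, 23, 52, 26, 612378, tzinfo=timezone.utc)

# No pull happened (pull timestamps are the zero time), so the SLO
# duration is simply observed - created.
print((observed - created).total_seconds())  # 1.612378 -> podStartSLOduration=1.61237803s
```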
Jul 6 23:52:29.184000 containerd[1471]: time="2025-07-06T23:52:29.182749968Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:29.184000 containerd[1471]: time="2025-07-06T23:52:29.183553546Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:52:29.184000 containerd[1471]: time="2025-07-06T23:52:29.183705006Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:29.186329 containerd[1471]: time="2025-07-06T23:52:29.186246520Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:29.187783 containerd[1471]: time="2025-07-06T23:52:29.187351840Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.114003763s" Jul 6 23:52:29.187783 containerd[1471]: time="2025-07-06T23:52:29.187393497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:52:29.190888 containerd[1471]: time="2025-07-06T23:52:29.190758045Z" level=info msg="CreateContainer within sandbox \"ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:52:29.212301 containerd[1471]: time="2025-07-06T23:52:29.212253712Z" level=info msg="CreateContainer within sandbox \"ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d\"" Jul 6 23:52:29.212922 containerd[1471]: time="2025-07-06T23:52:29.212745969Z" level=info msg="StartContainer for \"34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d\"" Jul 6 23:52:29.245442 systemd[1]: run-containerd-runc-k8s.io-34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d-runc.gP1xdc.mount: Deactivated successfully. Jul 6 23:52:29.257305 systemd[1]: Started cri-containerd-34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d.scope - libcontainer container 34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d. 
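[Editor's note] The pull records carry enough to estimate transfer rate: containerd read 25056543 bytes for quay.io/tigera/operator:v1.38.3 over a reported 2.114003763s (the 25052538 figure is the image size, reported separately). A quick back-of-the-envelope check:

```python
# Figures taken from the containerd pull records above.
bytes_read   = 25_056_543      # "active requests=0, bytes read=25056543"
pull_seconds = 2.114003763     # "... in 2.114003763s"

rate = bytes_read / pull_seconds
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")  # ~11.85 MB/s (11.30 MiB/s)
```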
Jul 6 23:52:29.291601 containerd[1471]: time="2025-07-06T23:52:29.291548359Z" level=info msg="StartContainer for \"34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d\" returns successfully" Jul 6 23:52:29.592831 kubelet[2497]: E0706 23:52:29.592410 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:29.713068 kubelet[2497]: E0706 23:52:29.711261 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:29.743242 kubelet[2497]: I0706 23:52:29.743180 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-6nwxf" podStartSLOduration=2.626078057 podStartE2EDuration="4.743160956s" podCreationTimestamp="2025-07-06 23:52:25 +0000 UTC" firstStartedPulling="2025-07-06 23:52:25.954225309 +0000 UTC m=+7.572036588" lastFinishedPulling="2025-07-06 23:52:29.188736272 +0000 UTC m=+9.689119487" observedRunningTime="2025-07-06 23:52:29.72787115 +0000 UTC m=+10.228254389" watchObservedRunningTime="2025-07-06 23:52:29.743160956 +0000 UTC m=+10.243544192" Jul 6 23:52:30.714064 kubelet[2497]: E0706 23:52:30.713764 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:32.663191 systemd[1]: cri-containerd-34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d.scope: Deactivated successfully. Jul 6 23:52:32.703011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d-rootfs.mount: Deactivated successfully. 
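[Editor's note] The tigera-operator startup record shows the SLO convention: podStartSLOduration excludes image pull time, i.e. it is podStartE2EDuration minus (lastFinishedPulling − firstStartedPulling). The subtraction has to use the monotonic m=+ offsets rather than wall-clock timestamps, because the timesyncd clock step logged at 23:52:28 shifted the wall clock by roughly a second mid-pull. Verifying with the record's values:

```python
# Monotonic offsets (the "m=+..." values) from the tigera-operator record.
first_started_pulling = 7.572036588
last_finished_pulling = 9.689119487
e2e = 4.743160956                     # podStartE2EDuration

pull = last_finished_pulling - first_started_pulling   # ~2.117s, close to
                                                       # containerd's 2.114s
print(f"{e2e - pull:.9f}s")  # 2.626078057s == podStartSLOduration
```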
Jul 6 23:52:32.706071 containerd[1471]: time="2025-07-06T23:52:32.705981590Z" level=info msg="shim disconnected" id=34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d namespace=k8s.io Jul 6 23:52:32.707993 containerd[1471]: time="2025-07-06T23:52:32.706746758Z" level=warning msg="cleaning up after shim disconnected" id=34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d namespace=k8s.io Jul 6 23:52:32.707993 containerd[1471]: time="2025-07-06T23:52:32.706782523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:32.734882 containerd[1471]: time="2025-07-06T23:52:32.734793678Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:52:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:52:33.727537 kubelet[2497]: I0706 23:52:33.727221 2497 scope.go:117] "RemoveContainer" containerID="34ed06165e57118db51e43390bd0af15c5c23ca69a24f58cfe904518d6db144d" Jul 6 23:52:33.735986 containerd[1471]: time="2025-07-06T23:52:33.735475790Z" level=info msg="CreateContainer within sandbox \"ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 6 23:52:33.762003 containerd[1471]: time="2025-07-06T23:52:33.760551619Z" level=info msg="CreateContainer within sandbox \"ba534d8164d1bd4793549177126efccd5bae9a26027cf0dd0762ec1eb7ffb7f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"81537548a18dfa55fba4eb3d41a36986e462652aad52ee5ae191ba27082e2dcf\"" Jul 6 23:52:33.764094 containerd[1471]: time="2025-07-06T23:52:33.763076828Z" level=info msg="StartContainer for \"81537548a18dfa55fba4eb3d41a36986e462652aad52ee5ae191ba27082e2dcf\"" Jul 6 23:52:33.808428 systemd[1]: Started cri-containerd-81537548a18dfa55fba4eb3d41a36986e462652aad52ee5ae191ba27082e2dcf.scope - libcontainer container 81537548a18dfa55fba4eb3d41a36986e462652aad52ee5ae191ba27082e2dcf. Jul 6 23:52:33.858621 containerd[1471]: time="2025-07-06T23:52:33.858396099Z" level=info msg="StartContainer for \"81537548a18dfa55fba4eb3d41a36986e462652aad52ee5ae191ba27082e2dcf\" returns successfully" Jul 6 23:52:35.778033 update_engine[1450]: I20250706 23:52:35.777025 1450 update_attempter.cc:509] Updating boot flags... Jul 6 23:52:35.880141 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2931) Jul 6 23:52:36.127071 sudo[1649]: pam_unix(sudo:session): session closed for user root Jul 6 23:52:36.131668 sshd[1646]: pam_unix(sshd:session): session closed for user core Jul 6 23:52:36.136948 systemd[1]: sshd@6-209.38.68.255:22-139.178.89.65:34982.service: Deactivated successfully. Jul 6 23:52:36.141283 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:52:36.142323 systemd[1]: session-7.scope: Consumed 4.759s CPU time, 144.9M memory peak, 0B memory swap peak. Jul 6 23:52:36.144796 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:52:36.146814 systemd-logind[1445]: Removed session 7. Jul 6 23:52:41.827384 systemd[1]: Created slice kubepods-besteffort-pod3a19c2a9_772c_48fc_bf88_2a9f2ecdc6c3.slice - libcontainer container kubepods-besteffort-pod3a19c2a9_772c_48fc_bf88_2a9f2ecdc6c3.slice. 
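[Editor's note] The records above compress the kubelet's crash-restart path: the first tigera-operator container (34ed06…, Attempt:0) exits and its shim is reaped, scope.go removes the dead container, and a replacement is created inside the same sandbox (ba534d81…) with the attempt counter bumped to 1. A schematic of that loop, using hypothetical stand-in calls:

```python
# Schematic restart path; `runtime` is a hypothetical CRI-like stub.
# Mirrors the records: RemoveContainer(old) -> CreateContainer with the
# attempt counter incremented -> StartContainer, in the original sandbox.
def restart_in_place(runtime, sandbox_id, spec, dead_container_id, attempt):
    runtime.RemoveContainer(dead_container_id)   # scope.go "RemoveContainer"
    new_id = runtime.CreateContainer(sandbox_id, spec, attempt=attempt + 1)
    runtime.StartContainer(new_id)               # "...returns successfully"
    return new_id, attempt + 1
```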
Jul 6 23:52:41.875144 kubelet[2497]: I0706 23:52:41.875085 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3-tigera-ca-bundle\") pod \"calico-typha-86f6458f9d-b2tg5\" (UID: \"3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3\") " pod="calico-system/calico-typha-86f6458f9d-b2tg5" Jul 6 23:52:41.875144 kubelet[2497]: I0706 23:52:41.875139 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7569n\" (UniqueName: \"kubernetes.io/projected/3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3-kube-api-access-7569n\") pod \"calico-typha-86f6458f9d-b2tg5\" (UID: \"3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3\") " pod="calico-system/calico-typha-86f6458f9d-b2tg5" Jul 6 23:52:41.875807 kubelet[2497]: I0706 23:52:41.875177 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3-typha-certs\") pod \"calico-typha-86f6458f9d-b2tg5\" (UID: \"3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3\") " pod="calico-system/calico-typha-86f6458f9d-b2tg5" Jul 6 23:52:42.136060 kubelet[2497]: E0706 23:52:42.135907 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:42.139793 containerd[1471]: time="2025-07-06T23:52:42.138425240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f6458f9d-b2tg5,Uid:3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:42.177119 systemd[1]: Created slice kubepods-besteffort-pod413e0965_a56e_4368_b675_03af71093c4e.slice - libcontainer container kubepods-besteffort-pod413e0965_a56e_4368_b675_03af71093c4e.slice. 
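[Editor's note] The flood of reconciler_common.go:251 records is the volume manager's desired-state/actual-state reconciliation at work: the populator (which reported "Finished populating initial desired state of world" at 23:52:19) feeds the desired cache, and for each pod volume not yet in the actual state the reconciler starts VerifyControllerAttachedVolume before mounting. A toy reduction of the pattern, with dict/set stand-ins for the two caches (all names illustrative):

```python
# Toy desired-state vs actual-state reconciliation, in the spirit of the
# kubelet volume manager's reconciler (all names here are illustrative).
def reconcile(desired: dict[str, str], actual: set[str], attach_and_mount) -> None:
    """desired maps a volume unique-name to its pod; actual holds mounted volumes."""
    for volume, pod in desired.items():
        if volume not in actual:
            # Log analogue: "operationExecutor.VerifyControllerAttachedVolume
            # started for volume ... pod ..."
            attach_and_mount(volume, pod)
            actual.add(volume)
    for volume in list(actual - set(desired)):
        actual.remove(volume)  # volumes no longer desired get torn down

reconcile(
    {"kubernetes.io/configmap/3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3-tigera-ca-bundle":
         "calico-typha-86f6458f9d-b2tg5"},
    set(),
    lambda v, p: print(f"VerifyControllerAttachedVolume started for {v} ({p})"),
)
```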
Jul 6 23:52:42.178657 kubelet[2497]: I0706 23:52:42.177477 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-policysync\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.178657 kubelet[2497]: I0706 23:52:42.177522 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-var-lib-calico\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.178657 kubelet[2497]: I0706 23:52:42.177543 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-flexvol-driver-host\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.178657 kubelet[2497]: I0706 23:52:42.177569 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-cni-bin-dir\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.178657 kubelet[2497]: I0706 23:52:42.177586 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-cni-log-dir\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180423 kubelet[2497]: I0706 23:52:42.177604 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-cni-net-dir\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180423 kubelet[2497]: I0706 23:52:42.177626 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-xtables-lock\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180423 kubelet[2497]: I0706 23:52:42.177646 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-lib-modules\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180423 kubelet[2497]: I0706 23:52:42.177666 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/413e0965-a56e-4368-b675-03af71093c4e-node-certs\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180423 kubelet[2497]: I0706 23:52:42.177682 2497 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ss9s\" (UniqueName: \"kubernetes.io/projected/413e0965-a56e-4368-b675-03af71093c4e-kube-api-access-4ss9s\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180562 kubelet[2497]: I0706 23:52:42.177700 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/413e0965-a56e-4368-b675-03af71093c4e-var-run-calico\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.180562 kubelet[2497]: I0706 23:52:42.177716 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/413e0965-a56e-4368-b675-03af71093c4e-tigera-ca-bundle\") pod \"calico-node-nxx25\" (UID: \"413e0965-a56e-4368-b675-03af71093c4e\") " pod="calico-system/calico-node-nxx25" Jul 6 23:52:42.202070 containerd[1471]: time="2025-07-06T23:52:42.200328366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:42.202070 containerd[1471]: time="2025-07-06T23:52:42.200391192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:42.202070 containerd[1471]: time="2025-07-06T23:52:42.200402300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:42.202070 containerd[1471]: time="2025-07-06T23:52:42.200494073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:42.242301 systemd[1]: Started cri-containerd-e671989b8f66317dd26077ae1364e26a8190ca36c8c1c2f336a0fa17ba06c67c.scope - libcontainer container e671989b8f66317dd26077ae1364e26a8190ca36c8c1c2f336a0fa17ba06c67c. Jul 6 23:52:42.294073 kubelet[2497]: E0706 23:52:42.294038 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.294073 kubelet[2497]: W0706 23:52:42.294064 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.302033 kubelet[2497]: E0706 23:52:42.301658 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.302033 kubelet[2497]: W0706 23:52:42.301685 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.304194 kubelet[2497]: E0706 23:52:42.304152 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.304348 kubelet[2497]: E0706 23:52:42.304152 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.356006 containerd[1471]: time="2025-07-06T23:52:42.355920544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f6458f9d-b2tg5,Uid:3a19c2a9-772c-48fc-bf88-2a9f2ecdc6c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e671989b8f66317dd26077ae1364e26a8190ca36c8c1c2f336a0fa17ba06c67c\"" Jul 6 23:52:42.364074 kubelet[2497]: E0706 23:52:42.363503 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:42.369881 containerd[1471]: time="2025-07-06T23:52:42.369509477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:52:42.391737 kubelet[2497]: E0706 23:52:42.391575 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:42.465853 kubelet[2497]: E0706 23:52:42.465563 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.465853 kubelet[2497]: W0706 23:52:42.465602 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.465853 kubelet[2497]: E0706 23:52:42.465631 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.466087 kubelet[2497]: E0706 23:52:42.465944 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.466087 kubelet[2497]: W0706 23:52:42.465955 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.466087 kubelet[2497]: E0706 23:52:42.466010 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.466585 kubelet[2497]: E0706 23:52:42.466251 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.466585 kubelet[2497]: W0706 23:52:42.466264 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.466585 kubelet[2497]: E0706 23:52:42.466274 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.472921 kubelet[2497]: E0706 23:52:42.472436 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.472921 kubelet[2497]: W0706 23:52:42.472476 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.472921 kubelet[2497]: E0706 23:52:42.472502 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.472921 kubelet[2497]: E0706 23:52:42.472867 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.472921 kubelet[2497]: W0706 23:52:42.472880 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.472921 kubelet[2497]: E0706 23:52:42.472896 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.473241 kubelet[2497]: E0706 23:52:42.473132 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.473241 kubelet[2497]: W0706 23:52:42.473141 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.473241 kubelet[2497]: E0706 23:52:42.473153 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.473348 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.474762 kubelet[2497]: W0706 23:52:42.473360 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.473370 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.473630 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.474762 kubelet[2497]: W0706 23:52:42.473642 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.473654 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.473882 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.474762 kubelet[2497]: W0706 23:52:42.473890 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.473900 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.474762 kubelet[2497]: E0706 23:52:42.474125 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475160 kubelet[2497]: W0706 23:52:42.474163 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475160 kubelet[2497]: E0706 23:52:42.474180 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.475160 kubelet[2497]: E0706 23:52:42.474406 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475160 kubelet[2497]: W0706 23:52:42.474415 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475160 kubelet[2497]: E0706 23:52:42.474426 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.475160 kubelet[2497]: E0706 23:52:42.474675 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475160 kubelet[2497]: W0706 23:52:42.474686 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475160 kubelet[2497]: E0706 23:52:42.474696 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.475160 kubelet[2497]: E0706 23:52:42.474901 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475160 kubelet[2497]: W0706 23:52:42.474909 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475403 kubelet[2497]: E0706 23:52:42.474918 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.475403 kubelet[2497]: E0706 23:52:42.475165 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475403 kubelet[2497]: W0706 23:52:42.475174 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475403 kubelet[2497]: E0706 23:52:42.475184 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.475403 kubelet[2497]: E0706 23:52:42.475386 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475527 kubelet[2497]: W0706 23:52:42.475407 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475527 kubelet[2497]: E0706 23:52:42.475418 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.475692 kubelet[2497]: E0706 23:52:42.475673 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475692 kubelet[2497]: W0706 23:52:42.475688 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475781 kubelet[2497]: E0706 23:52:42.475698 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.475919 kubelet[2497]: E0706 23:52:42.475899 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.475919 kubelet[2497]: W0706 23:52:42.475910 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.475919 kubelet[2497]: E0706 23:52:42.475920 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.476539 kubelet[2497]: E0706 23:52:42.476144 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.476539 kubelet[2497]: W0706 23:52:42.476155 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.476539 kubelet[2497]: E0706 23:52:42.476166 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.476539 kubelet[2497]: E0706 23:52:42.476365 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.476539 kubelet[2497]: W0706 23:52:42.476374 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.476539 kubelet[2497]: E0706 23:52:42.476382 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.477149 kubelet[2497]: E0706 23:52:42.477132 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.477149 kubelet[2497]: W0706 23:52:42.477148 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.477250 kubelet[2497]: E0706 23:52:42.477160 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.480716 kubelet[2497]: E0706 23:52:42.480683 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.480716 kubelet[2497]: W0706 23:52:42.480700 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.480833 kubelet[2497]: E0706 23:52:42.480715 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.480833 kubelet[2497]: I0706 23:52:42.480769 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4d7dcac2-ec06-4a54-afc7-632e8abadb5b-socket-dir\") pod \"csi-node-driver-r52x4\" (UID: \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\") " pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:42.481309 kubelet[2497]: E0706 23:52:42.481102 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.481309 kubelet[2497]: W0706 23:52:42.481119 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.481309 kubelet[2497]: E0706 23:52:42.481160 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.481309 kubelet[2497]: I0706 23:52:42.481188 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d7dcac2-ec06-4a54-afc7-632e8abadb5b-kubelet-dir\") pod \"csi-node-driver-r52x4\" (UID: \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\") " pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:42.481937 kubelet[2497]: E0706 23:52:42.481864 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.481937 kubelet[2497]: W0706 23:52:42.481882 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.481937 kubelet[2497]: E0706 23:52:42.481898 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.481937 kubelet[2497]: I0706 23:52:42.481917 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4d7dcac2-ec06-4a54-afc7-632e8abadb5b-varrun\") pod \"csi-node-driver-r52x4\" (UID: \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\") " pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:42.482399 kubelet[2497]: E0706 23:52:42.482190 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.482399 kubelet[2497]: W0706 23:52:42.482203 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.482399 kubelet[2497]: E0706 23:52:42.482303 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.482399 kubelet[2497]: I0706 23:52:42.482323 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4d7dcac2-ec06-4a54-afc7-632e8abadb5b-registration-dir\") pod \"csi-node-driver-r52x4\" (UID: \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\") " pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:42.482622 kubelet[2497]: E0706 23:52:42.482515 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.482622 kubelet[2497]: W0706 23:52:42.482540 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.482622 kubelet[2497]: E0706 23:52:42.482616 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.483188 kubelet[2497]: E0706 23:52:42.482789 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.483188 kubelet[2497]: W0706 23:52:42.482801 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.483188 kubelet[2497]: E0706 23:52:42.482855 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.483188 kubelet[2497]: E0706 23:52:42.483032 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.483188 kubelet[2497]: W0706 23:52:42.483040 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.483188 kubelet[2497]: E0706 23:52:42.483084 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.483377 kubelet[2497]: E0706 23:52:42.483291 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.483377 kubelet[2497]: W0706 23:52:42.483299 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.483377 kubelet[2497]: E0706 23:52:42.483311 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.483377 kubelet[2497]: I0706 23:52:42.483327 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vk4j\" (UniqueName: \"kubernetes.io/projected/4d7dcac2-ec06-4a54-afc7-632e8abadb5b-kube-api-access-8vk4j\") pod \"csi-node-driver-r52x4\" (UID: \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\") " pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:42.483795 kubelet[2497]: E0706 23:52:42.483496 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.483795 kubelet[2497]: W0706 23:52:42.483516 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.483795 kubelet[2497]: E0706 23:52:42.483540 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.483795 kubelet[2497]: E0706 23:52:42.483731 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.483795 kubelet[2497]: W0706 23:52:42.483745 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.483795 kubelet[2497]: E0706 23:52:42.483758 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.484261 kubelet[2497]: E0706 23:52:42.484032 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.484261 kubelet[2497]: W0706 23:52:42.484045 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.484261 kubelet[2497]: E0706 23:52:42.484054 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.484261 kubelet[2497]: E0706 23:52:42.484239 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.484261 kubelet[2497]: W0706 23:52:42.484246 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.484261 kubelet[2497]: E0706 23:52:42.484265 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.484549 kubelet[2497]: E0706 23:52:42.484422 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.484549 kubelet[2497]: W0706 23:52:42.484428 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.484549 kubelet[2497]: E0706 23:52:42.484436 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.484680 kubelet[2497]: E0706 23:52:42.484650 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.484680 kubelet[2497]: W0706 23:52:42.484658 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.484680 kubelet[2497]: E0706 23:52:42.484667 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.485376 kubelet[2497]: E0706 23:52:42.484852 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.485376 kubelet[2497]: W0706 23:52:42.484863 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.485376 kubelet[2497]: E0706 23:52:42.484871 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.486053 containerd[1471]: time="2025-07-06T23:52:42.485929010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nxx25,Uid:413e0965-a56e-4368-b675-03af71093c4e,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:42.528458 containerd[1471]: time="2025-07-06T23:52:42.527578205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:42.528458 containerd[1471]: time="2025-07-06T23:52:42.527646965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:42.528458 containerd[1471]: time="2025-07-06T23:52:42.527662287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:42.528458 containerd[1471]: time="2025-07-06T23:52:42.527796453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:42.576158 systemd[1]: Started cri-containerd-4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0.scope - libcontainer container 4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0. Jul 6 23:52:42.613166 kubelet[2497]: E0706 23:52:42.613111 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.613166 kubelet[2497]: W0706 23:52:42.613160 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.613368 kubelet[2497]: E0706 23:52:42.613254 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.616062 kubelet[2497]: E0706 23:52:42.616012 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.616062 kubelet[2497]: W0706 23:52:42.616051 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.616288 kubelet[2497]: E0706 23:52:42.616098 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.625430 kubelet[2497]: E0706 23:52:42.624535 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.625430 kubelet[2497]: W0706 23:52:42.624562 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.625430 kubelet[2497]: E0706 23:52:42.624601 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.629348 kubelet[2497]: E0706 23:52:42.628952 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.629348 kubelet[2497]: W0706 23:52:42.628992 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.629348 kubelet[2497]: E0706 23:52:42.629023 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.633107 kubelet[2497]: E0706 23:52:42.632936 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.633107 kubelet[2497]: W0706 23:52:42.633032 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.633107 kubelet[2497]: E0706 23:52:42.633080 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.636603 kubelet[2497]: E0706 23:52:42.636396 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.636603 kubelet[2497]: W0706 23:52:42.636419 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.636603 kubelet[2497]: E0706 23:52:42.636442 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.640162 kubelet[2497]: E0706 23:52:42.640125 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.640162 kubelet[2497]: W0706 23:52:42.640153 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.640612 kubelet[2497]: E0706 23:52:42.640359 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.642141 kubelet[2497]: E0706 23:52:42.641944 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.642141 kubelet[2497]: W0706 23:52:42.641995 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.642141 kubelet[2497]: E0706 23:52:42.642052 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.647482 kubelet[2497]: E0706 23:52:42.647442 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.647482 kubelet[2497]: W0706 23:52:42.647475 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.648389 kubelet[2497]: E0706 23:52:42.647584 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.651440 kubelet[2497]: E0706 23:52:42.651396 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.651440 kubelet[2497]: W0706 23:52:42.651430 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.652222 kubelet[2497]: E0706 23:52:42.652183 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.655935 kubelet[2497]: E0706 23:52:42.655583 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.656084 kubelet[2497]: W0706 23:52:42.655940 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.656117 kubelet[2497]: E0706 23:52:42.656079 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.656454 kubelet[2497]: E0706 23:52:42.656428 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.656454 kubelet[2497]: W0706 23:52:42.656450 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.656583 kubelet[2497]: E0706 23:52:42.656554 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.658072 kubelet[2497]: E0706 23:52:42.658028 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.658072 kubelet[2497]: W0706 23:52:42.658068 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.658941 kubelet[2497]: E0706 23:52:42.658244 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.659831 kubelet[2497]: E0706 23:52:42.659796 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.659831 kubelet[2497]: W0706 23:52:42.659825 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.660202 kubelet[2497]: E0706 23:52:42.660176 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.661373 kubelet[2497]: E0706 23:52:42.661304 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.661373 kubelet[2497]: W0706 23:52:42.661327 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.661903 kubelet[2497]: E0706 23:52:42.661481 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.662600 kubelet[2497]: E0706 23:52:42.662577 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.662600 kubelet[2497]: W0706 23:52:42.662595 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.663035 kubelet[2497]: E0706 23:52:42.663010 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.663988 kubelet[2497]: E0706 23:52:42.663472 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.664484 kubelet[2497]: W0706 23:52:42.664134 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.664484 kubelet[2497]: E0706 23:52:42.664192 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.666263 kubelet[2497]: E0706 23:52:42.666148 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.666263 kubelet[2497]: W0706 23:52:42.666165 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.667007 kubelet[2497]: E0706 23:52:42.666530 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.667007 kubelet[2497]: W0706 23:52:42.666545 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.667007 kubelet[2497]: E0706 23:52:42.666602 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.667007 kubelet[2497]: E0706 23:52:42.666645 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.668988 kubelet[2497]: E0706 23:52:42.667499 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.668988 kubelet[2497]: W0706 23:52:42.667520 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.668988 kubelet[2497]: E0706 23:52:42.667551 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.669323 kubelet[2497]: E0706 23:52:42.669180 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.669323 kubelet[2497]: W0706 23:52:42.669194 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.669323 kubelet[2497]: E0706 23:52:42.669223 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.669463 kubelet[2497]: E0706 23:52:42.669453 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.669507 kubelet[2497]: W0706 23:52:42.669498 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.669629 kubelet[2497]: E0706 23:52:42.669614 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.669807 kubelet[2497]: E0706 23:52:42.669795 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.669893 kubelet[2497]: W0706 23:52:42.669875 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.670798 kubelet[2497]: E0706 23:52:42.670772 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.671119 kubelet[2497]: E0706 23:52:42.671105 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.671290 kubelet[2497]: W0706 23:52:42.671179 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.671290 kubelet[2497]: E0706 23:52:42.671212 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.672015 kubelet[2497]: E0706 23:52:42.671409 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.672082 kubelet[2497]: W0706 23:52:42.672028 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.672082 kubelet[2497]: E0706 23:52:42.672046 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:42.688272 kubelet[2497]: E0706 23:52:42.688226 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:42.688272 kubelet[2497]: W0706 23:52:42.688262 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:42.688470 kubelet[2497]: E0706 23:52:42.688296 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:42.744160 containerd[1471]: time="2025-07-06T23:52:42.744066661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nxx25,Uid:413e0965-a56e-4368-b675-03af71093c4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\"" Jul 6 23:52:43.652174 kubelet[2497]: E0706 23:52:43.652017 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:43.701427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099683874.mount: Deactivated successfully. Jul 6 23:52:44.537393 containerd[1471]: time="2025-07-06T23:52:44.537327691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:44.558022 containerd[1471]: time="2025-07-06T23:52:44.557917338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:52:44.559144 containerd[1471]: time="2025-07-06T23:52:44.559093660Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:44.562144 containerd[1471]: time="2025-07-06T23:52:44.561532518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:44.563313 containerd[1471]: time="2025-07-06T23:52:44.562827122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.193263276s" Jul 6 23:52:44.563313 containerd[1471]: time="2025-07-06T23:52:44.562867628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:52:44.574117 containerd[1471]: time="2025-07-06T23:52:44.574077704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:52:44.621167 containerd[1471]: time="2025-07-06T23:52:44.620915831Z" level=info msg="CreateContainer within sandbox \"e671989b8f66317dd26077ae1364e26a8190ca36c8c1c2f336a0fa17ba06c67c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:52:44.638690 containerd[1471]: time="2025-07-06T23:52:44.638616233Z" level=info msg="CreateContainer within sandbox \"e671989b8f66317dd26077ae1364e26a8190ca36c8c1c2f336a0fa17ba06c67c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"de8287ecd4550e5531f9891d577dd97ec5df031c3e2e82a61bb98714780a9d5d\"" Jul 6 23:52:44.639170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698682939.mount: Deactivated successfully. 
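The kubelet triplet repeated above (driver-call.go:262, driver-call.go:149, plugins.go:695) is kubelet probing the FlexVolume plugin directory nodeagent~uds: it execs the driver binary with the argument "init" and tries to JSON-decode the reply, but /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the output is empty and decoding fails. A minimal Go sketch of that failure mode (illustrative only, not kubelet's source; the driverStatus field set is an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the JSON reply a FlexVolume driver is
// expected to print for "init"; the exact fields are illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeInit(driver string) error {
	// A missing executable yields an exec error and empty output.
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			driver, err, string(out))
	}
	// json.Unmarshal on empty input returns exactly Go's
	// "unexpected end of JSON input", matching the log.
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w",
			string(out), jerr)
	}
	return nil
}

func main() {
	fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}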
Jul 6 23:52:44.646326 containerd[1471]: time="2025-07-06T23:52:44.646155641Z" level=info msg="StartContainer for \"de8287ecd4550e5531f9891d577dd97ec5df031c3e2e82a61bb98714780a9d5d\"" Jul 6 23:52:44.747295 systemd[1]: Started cri-containerd-de8287ecd4550e5531f9891d577dd97ec5df031c3e2e82a61bb98714780a9d5d.scope - libcontainer container de8287ecd4550e5531f9891d577dd97ec5df031c3e2e82a61bb98714780a9d5d. Jul 6 23:52:44.826127 containerd[1471]: time="2025-07-06T23:52:44.824749803Z" level=info msg="StartContainer for \"de8287ecd4550e5531f9891d577dd97ec5df031c3e2e82a61bb98714780a9d5d\" returns successfully" Jul 6 23:52:45.655259 kubelet[2497]: E0706 23:52:45.654086 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:45.787020 kubelet[2497]: E0706 23:52:45.786952 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:45.797248 containerd[1471]: time="2025-07-06T23:52:45.797196653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:45.798859 containerd[1471]: time="2025-07-06T23:52:45.798790830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:52:45.799507 containerd[1471]: time="2025-07-06T23:52:45.799146763Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:45.802830 containerd[1471]: time="2025-07-06T23:52:45.802769254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:45.803847 containerd[1471]: time="2025-07-06T23:52:45.803776406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.229062503s" Jul 6 23:52:45.803847 containerd[1471]: time="2025-07-06T23:52:45.803835341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:52:45.811278 containerd[1471]: time="2025-07-06T23:52:45.810558088Z" level=info msg="CreateContainer within sandbox \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:52:45.833521 kubelet[2497]: E0706 23:52:45.832069 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:45.833521 kubelet[2497]: W0706 23:52:45.832106 2497 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:45.833521 kubelet[2497]: E0706 23:52:45.832139 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:45.836459 kubelet[2497]: E0706 23:52:45.836074 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:45.836459 kubelet[2497]: W0706 23:52:45.836087 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:45.836459 kubelet[2497]: E0706 23:52:45.836101 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:45.845536 containerd[1471]: time="2025-07-06T23:52:45.845479031Z" level=info msg="CreateContainer within sandbox \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45\"" Jul 6 23:52:45.847597 containerd[1471]: time="2025-07-06T23:52:45.847535348Z" level=info msg="StartContainer for \"dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45\"" Jul 6 23:52:45.872695 kubelet[2497]: E0706 23:52:45.872657 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:45.873634 kubelet[2497]: W0706 23:52:45.873584 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:45.874345 kubelet[2497]: E0706 23:52:45.874146 2497 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:45.919490 systemd[1]: Started cri-containerd-dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45.scope - libcontainer container dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45. Jul 6 23:52:45.975497 containerd[1471]: time="2025-07-06T23:52:45.974401114Z" level=info msg="StartContainer for \"dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45\" returns successfully" Jul 6 23:52:45.994152 systemd[1]: cri-containerd-dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45.scope: Deactivated successfully. Jul 6 23:52:46.029892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45-rootfs.mount: Deactivated successfully. 
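The flexvol-driver lifecycle above (CreateContainer within the calico-node sandbox, StartContainer, a transient systemd scope, then deactivation once the init container exits) corresponds to the CRI RuntimeService calls kubelet issues against containerd. A hedged sketch of the same call sequence using the published CRI client API (k8s.io/cri-api); the socket path and the bare-bones configs are assumptions, and the real requests carry far more (mounts, env, security context):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket; containerd serves CRI v1 on its main socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-node-nxx25",
			Uid:       "413e0965-a56e-4368-b675-03af71093c4e",
			Namespace: "calico-system",
			Attempt:   0,
		},
	}
	// -> containerd logs "RunPodSandbox for &PodSandboxMetadata{...}"
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxConfig})
	if err != nil {
		panic(err)
	}

	// -> "CreateContainer within sandbox ... for &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "flexvol-driver", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2"},
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		panic(err)
	}

	// -> "StartContainer for ..." / "... returns successfully"
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	fmt.Println(created.ContainerId, err)
}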
Jul 6 23:52:46.039434 containerd[1471]: time="2025-07-06T23:52:46.039162172Z" level=info msg="shim disconnected" id=dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45 namespace=k8s.io Jul 6 23:52:46.039434 containerd[1471]: time="2025-07-06T23:52:46.039307809Z" level=warning msg="cleaning up after shim disconnected" id=dfd994eb8fa3e426b1aa03c6e2426d8f7f05ab15e6aaf692ae17c8c26c286f45 namespace=k8s.io Jul 6 23:52:46.039434 containerd[1471]: time="2025-07-06T23:52:46.039323953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:46.792365 kubelet[2497]: I0706 23:52:46.790698 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:52:46.792365 kubelet[2497]: E0706 23:52:46.791082 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:46.794627 containerd[1471]: time="2025-07-06T23:52:46.794577626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:52:46.814561 kubelet[2497]: I0706 23:52:46.813317 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86f6458f9d-b2tg5" podStartSLOduration=3.6084092610000003 podStartE2EDuration="5.813287389s" podCreationTimestamp="2025-07-06 23:52:41 +0000 UTC" firstStartedPulling="2025-07-06 23:52:42.36890059 +0000 UTC m=+22.869283819" lastFinishedPulling="2025-07-06 23:52:44.573778732 +0000 UTC m=+25.074161947" observedRunningTime="2025-07-06 23:52:45.817077334 +0000 UTC m=+26.317460568" watchObservedRunningTime="2025-07-06 23:52:46.813287389 +0000 UTC m=+27.313670635" Jul 6 23:52:47.652984 kubelet[2497]: E0706 23:52:47.651730 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:49.652889 kubelet[2497]: E0706 23:52:49.651928 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:51.403670 containerd[1471]: time="2025-07-06T23:52:51.403594342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:51.404836 containerd[1471]: time="2025-07-06T23:52:51.404773498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:52:51.406008 containerd[1471]: time="2025-07-06T23:52:51.405473526Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:51.409115 containerd[1471]: time="2025-07-06T23:52:51.407635928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:51.409115 containerd[1471]: time="2025-07-06T23:52:51.408877211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" 
with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.614245487s" Jul 6 23:52:51.409115 containerd[1471]: time="2025-07-06T23:52:51.408925142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:52:51.413187 containerd[1471]: time="2025-07-06T23:52:51.413139985Z" level=info msg="CreateContainer within sandbox \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:52:51.452730 containerd[1471]: time="2025-07-06T23:52:51.452657717Z" level=info msg="CreateContainer within sandbox \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970\"" Jul 6 23:52:51.454883 containerd[1471]: time="2025-07-06T23:52:51.453851511Z" level=info msg="StartContainer for \"a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970\"" Jul 6 23:52:51.520252 systemd[1]: Started cri-containerd-a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970.scope - libcontainer container a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970. Jul 6 23:52:51.578428 containerd[1471]: time="2025-07-06T23:52:51.578379303Z" level=info msg="StartContainer for \"a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970\" returns successfully" Jul 6 23:52:51.654755 kubelet[2497]: E0706 23:52:51.654599 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:52.207164 systemd[1]: cri-containerd-a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970.scope: Deactivated successfully. Jul 6 23:52:52.243872 containerd[1471]: time="2025-07-06T23:52:52.242745628Z" level=info msg="shim disconnected" id=a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970 namespace=k8s.io Jul 6 23:52:52.243872 containerd[1471]: time="2025-07-06T23:52:52.242814876Z" level=warning msg="cleaning up after shim disconnected" id=a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970 namespace=k8s.io Jul 6 23:52:52.243872 containerd[1471]: time="2025-07-06T23:52:52.242826855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:52.243526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5f558260568e47e3e79025a727b32c93d6e843ade5964250f75dccd8f1de970-rootfs.mount: Deactivated successfully. Jul 6 23:52:52.257932 kubelet[2497]: I0706 23:52:52.255919 2497 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:52:52.329707 systemd[1]: Created slice kubepods-burstable-pod0e94334e_f3fd_4a19_bf8e_6d83c5d49a81.slice - libcontainer container kubepods-burstable-pod0e94334e_f3fd_4a19_bf8e_6d83c5d49a81.slice. 
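Both durations in the pod_startup_latency_tracker record above for calico-typha-86f6458f9d-b2tg5 can be re-derived from the timestamps in the record itself: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes the image-pull window, here taken from the monotonic m=+ offsets (using those, rather than the wall-clock stamps, reproduces the logged value exactly). A quick check, assuming that interpretation:

package main

import "fmt"

func main() {
	// Wall-clock seconds within 23:52 (from the log record).
	created := 41.000000000 // podCreationTimestamp  23:52:41
	running := 46.813287389 // observedRunningTime   23:52:46.813287389

	// Monotonic offsets (the m=+... values in the same record).
	pullStart := 22.869283819 // firstStartedPulling
	pullEnd := 25.074161947   // lastFinishedPulling

	e2e := running - created           // logged podStartE2EDuration="5.813287389s"
	slo := e2e - (pullEnd - pullStart) // logged podStartSLOduration=3.6084092610000003
	fmt.Printf("E2E=%.9fs SLO=%.9fs\n", e2e, slo)
}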
Jul 6 23:52:52.348752 systemd[1]: Created slice kubepods-besteffort-pod4c958244_9bad_4fa8_9b01_c7784873e0bf.slice - libcontainer container kubepods-besteffort-pod4c958244_9bad_4fa8_9b01_c7784873e0bf.slice. Jul 6 23:52:52.358392 systemd[1]: Created slice kubepods-besteffort-pod9370f950_40d0_4ae3_b3c3_c5c05feb1803.slice - libcontainer container kubepods-besteffort-pod9370f950_40d0_4ae3_b3c3_c5c05feb1803.slice. Jul 6 23:52:52.366264 systemd[1]: Created slice kubepods-besteffort-pode98209af_ab48_432a_83ec_cef900e23c8c.slice - libcontainer container kubepods-besteffort-pode98209af_ab48_432a_83ec_cef900e23c8c.slice. Jul 6 23:52:52.374892 systemd[1]: Created slice kubepods-besteffort-pod54c845ff_89ff_445b_9c32_19dae23f02f5.slice - libcontainer container kubepods-besteffort-pod54c845ff_89ff_445b_9c32_19dae23f02f5.slice. Jul 6 23:52:52.388808 systemd[1]: Created slice kubepods-besteffort-poda024894b_79af_4474_8f3a_f963becd00ab.slice - libcontainer container kubepods-besteffort-poda024894b_79af_4474_8f3a_f963becd00ab.slice. Jul 6 23:52:52.414372 systemd[1]: Created slice kubepods-burstable-pod0f170527_172c_4d49_bd6a_2a7a489db328.slice - libcontainer container kubepods-burstable-pod0f170527_172c_4d49_bd6a_2a7a489db328.slice. Jul 6 23:52:52.435043 kubelet[2497]: I0706 23:52:52.434957 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c845ff-89ff-445b-9c32-19dae23f02f5-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-7nwsf\" (UID: \"54c845ff-89ff-445b-9c32-19dae23f02f5\") " pod="calico-system/goldmane-768f4c5c69-7nwsf" Jul 6 23:52:52.435361 kubelet[2497]: I0706 23:52:52.435346 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnx4k\" (UniqueName: \"kubernetes.io/projected/0f170527-172c-4d49-bd6a-2a7a489db328-kube-api-access-hnx4k\") pod \"coredns-668d6bf9bc-ssrs9\" (UID: \"0f170527-172c-4d49-bd6a-2a7a489db328\") " pod="kube-system/coredns-668d6bf9bc-ssrs9" Jul 6 23:52:52.435481 kubelet[2497]: I0706 23:52:52.435461 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f170527-172c-4d49-bd6a-2a7a489db328-config-volume\") pod \"coredns-668d6bf9bc-ssrs9\" (UID: \"0f170527-172c-4d49-bd6a-2a7a489db328\") " pod="kube-system/coredns-668d6bf9bc-ssrs9" Jul 6 23:52:52.435588 kubelet[2497]: I0706 23:52:52.435576 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp78l\" (UniqueName: \"kubernetes.io/projected/4c958244-9bad-4fa8-9b01-c7784873e0bf-kube-api-access-fp78l\") pod \"whisker-6f877578bb-gzfxs\" (UID: \"4c958244-9bad-4fa8-9b01-c7784873e0bf\") " pod="calico-system/whisker-6f877578bb-gzfxs" Jul 6 23:52:52.435740 kubelet[2497]: I0706 23:52:52.435675 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmr4f\" (UniqueName: \"kubernetes.io/projected/0e94334e-f3fd-4a19-bf8e-6d83c5d49a81-kube-api-access-dmr4f\") pod \"coredns-668d6bf9bc-5fvn5\" (UID: \"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81\") " pod="kube-system/coredns-668d6bf9bc-5fvn5" Jul 6 23:52:52.435892 kubelet[2497]: I0706 23:52:52.435802 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c845ff-89ff-445b-9c32-19dae23f02f5-config\") pod 
\"goldmane-768f4c5c69-7nwsf\" (UID: \"54c845ff-89ff-445b-9c32-19dae23f02f5\") " pod="calico-system/goldmane-768f4c5c69-7nwsf" Jul 6 23:52:52.435892 kubelet[2497]: I0706 23:52:52.435827 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a024894b-79af-4474-8f3a-f963becd00ab-calico-apiserver-certs\") pod \"calico-apiserver-6b5bcb5877-x5v6z\" (UID: \"a024894b-79af-4474-8f3a-f963becd00ab\") " pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" Jul 6 23:52:52.436108 kubelet[2497]: I0706 23:52:52.435843 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rtq\" (UniqueName: \"kubernetes.io/projected/a024894b-79af-4474-8f3a-f963becd00ab-kube-api-access-q9rtq\") pod \"calico-apiserver-6b5bcb5877-x5v6z\" (UID: \"a024894b-79af-4474-8f3a-f963becd00ab\") " pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" Jul 6 23:52:52.436108 kubelet[2497]: I0706 23:52:52.436065 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e94334e-f3fd-4a19-bf8e-6d83c5d49a81-config-volume\") pod \"coredns-668d6bf9bc-5fvn5\" (UID: \"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81\") " pod="kube-system/coredns-668d6bf9bc-5fvn5" Jul 6 23:52:52.436305 kubelet[2497]: I0706 23:52:52.436206 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9370f950-40d0-4ae3-b3c3-c5c05feb1803-calico-apiserver-certs\") pod \"calico-apiserver-6b5bcb5877-jhw4p\" (UID: \"9370f950-40d0-4ae3-b3c3-c5c05feb1803\") " pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" Jul 6 23:52:52.436305 kubelet[2497]: I0706 23:52:52.436233 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/54c845ff-89ff-445b-9c32-19dae23f02f5-goldmane-key-pair\") pod \"goldmane-768f4c5c69-7nwsf\" (UID: \"54c845ff-89ff-445b-9c32-19dae23f02f5\") " pod="calico-system/goldmane-768f4c5c69-7nwsf" Jul 6 23:52:52.436587 kubelet[2497]: I0706 23:52:52.436502 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rsn\" (UniqueName: \"kubernetes.io/projected/e98209af-ab48-432a-83ec-cef900e23c8c-kube-api-access-g7rsn\") pod \"calico-kube-controllers-7c8fd5987-h2chv\" (UID: \"e98209af-ab48-432a-83ec-cef900e23c8c\") " pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" Jul 6 23:52:52.436883 kubelet[2497]: I0706 23:52:52.436833 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-backend-key-pair\") pod \"whisker-6f877578bb-gzfxs\" (UID: \"4c958244-9bad-4fa8-9b01-c7784873e0bf\") " pod="calico-system/whisker-6f877578bb-gzfxs" Jul 6 23:52:52.437097 kubelet[2497]: I0706 23:52:52.437029 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-ca-bundle\") pod \"whisker-6f877578bb-gzfxs\" (UID: \"4c958244-9bad-4fa8-9b01-c7784873e0bf\") " pod="calico-system/whisker-6f877578bb-gzfxs" Jul 6 23:52:52.437452 kubelet[2497]: I0706 
23:52:52.437265 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxpp\" (UniqueName: \"kubernetes.io/projected/54c845ff-89ff-445b-9c32-19dae23f02f5-kube-api-access-qtxpp\") pod \"goldmane-768f4c5c69-7nwsf\" (UID: \"54c845ff-89ff-445b-9c32-19dae23f02f5\") " pod="calico-system/goldmane-768f4c5c69-7nwsf" Jul 6 23:52:52.437452 kubelet[2497]: I0706 23:52:52.437293 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98209af-ab48-432a-83ec-cef900e23c8c-tigera-ca-bundle\") pod \"calico-kube-controllers-7c8fd5987-h2chv\" (UID: \"e98209af-ab48-432a-83ec-cef900e23c8c\") " pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" Jul 6 23:52:52.437452 kubelet[2497]: I0706 23:52:52.437314 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjswg\" (UniqueName: \"kubernetes.io/projected/9370f950-40d0-4ae3-b3c3-c5c05feb1803-kube-api-access-bjswg\") pod \"calico-apiserver-6b5bcb5877-jhw4p\" (UID: \"9370f950-40d0-4ae3-b3c3-c5c05feb1803\") " pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" Jul 6 23:52:52.638610 kubelet[2497]: E0706 23:52:52.637310 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:52.652158 containerd[1471]: time="2025-07-06T23:52:52.652069205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fvn5,Uid:0e94334e-f3fd-4a19-bf8e-6d83c5d49a81,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:52.653715 containerd[1471]: time="2025-07-06T23:52:52.653678395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f877578bb-gzfxs,Uid:4c958244-9bad-4fa8-9b01-c7784873e0bf,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:52.668500 containerd[1471]: time="2025-07-06T23:52:52.667926123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-jhw4p,Uid:9370f950-40d0-4ae3-b3c3-c5c05feb1803,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:52:52.671327 containerd[1471]: time="2025-07-06T23:52:52.671010288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8fd5987-h2chv,Uid:e98209af-ab48-432a-83ec-cef900e23c8c,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:52.693288 containerd[1471]: time="2025-07-06T23:52:52.692815717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7nwsf,Uid:54c845ff-89ff-445b-9c32-19dae23f02f5,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:52.709107 containerd[1471]: time="2025-07-06T23:52:52.709051569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-x5v6z,Uid:a024894b-79af-4474-8f3a-f963becd00ab,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:52:52.722688 kubelet[2497]: E0706 23:52:52.722322 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:52.735947 containerd[1471]: time="2025-07-06T23:52:52.735476113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssrs9,Uid:0f170527-172c-4d49-bd6a-2a7a489db328,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:52.825020 containerd[1471]: time="2025-07-06T23:52:52.823730669Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:52:53.038208 containerd[1471]: time="2025-07-06T23:52:53.038145938Z" level=error msg="Failed to destroy network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.043449 containerd[1471]: time="2025-07-06T23:52:53.043377637Z" level=error msg="encountered an error cleaning up failed sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.043837 containerd[1471]: time="2025-07-06T23:52:53.043807148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-jhw4p,Uid:9370f950-40d0-4ae3-b3c3-c5c05feb1803,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.045388 kubelet[2497]: E0706 23:52:53.044370 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.045388 kubelet[2497]: E0706 23:52:53.044456 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" Jul 6 23:52:53.045388 kubelet[2497]: E0706 23:52:53.044488 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" Jul 6 23:52:53.045594 kubelet[2497]: E0706 23:52:53.044545 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5bcb5877-jhw4p_calico-apiserver(9370f950-40d0-4ae3-b3c3-c5c05feb1803)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5bcb5877-jhw4p_calico-apiserver(9370f950-40d0-4ae3-b3c3-c5c05feb1803)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" podUID="9370f950-40d0-4ae3-b3c3-c5c05feb1803" Jul 6 23:52:53.047646 containerd[1471]: time="2025-07-06T23:52:53.047579297Z" level=error msg="Failed to destroy network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.048237 containerd[1471]: time="2025-07-06T23:52:53.048204405Z" level=error msg="encountered an error cleaning up failed sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.048407 containerd[1471]: time="2025-07-06T23:52:53.048383479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fvn5,Uid:0e94334e-f3fd-4a19-bf8e-6d83c5d49a81,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.055049 kubelet[2497]: E0706 23:52:53.054119 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.055049 kubelet[2497]: E0706 23:52:53.054180 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fvn5" Jul 6 23:52:53.055049 kubelet[2497]: E0706 23:52:53.054206 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fvn5" Jul 6 23:52:53.055292 kubelet[2497]: E0706 23:52:53.054261 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5fvn5_kube-system(0e94334e-f3fd-4a19-bf8e-6d83c5d49a81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5fvn5_kube-system(0e94334e-f3fd-4a19-bf8e-6d83c5d49a81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5fvn5" podUID="0e94334e-f3fd-4a19-bf8e-6d83c5d49a81" Jul 6 23:52:53.057466 containerd[1471]: time="2025-07-06T23:52:53.057394349Z" level=error msg="Failed to destroy network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.059949 containerd[1471]: time="2025-07-06T23:52:53.059896685Z" level=error msg="encountered an error cleaning up failed sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.060395 containerd[1471]: time="2025-07-06T23:52:53.060365616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f877578bb-gzfxs,Uid:4c958244-9bad-4fa8-9b01-c7784873e0bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.062360 kubelet[2497]: E0706 23:52:53.061938 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.062360 kubelet[2497]: E0706 23:52:53.062045 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f877578bb-gzfxs" Jul 6 23:52:53.062360 kubelet[2497]: E0706 23:52:53.062073 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f877578bb-gzfxs" Jul 6 23:52:53.063861 kubelet[2497]: E0706 23:52:53.062122 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f877578bb-gzfxs_calico-system(4c958244-9bad-4fa8-9b01-c7784873e0bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f877578bb-gzfxs_calico-system(4c958244-9bad-4fa8-9b01-c7784873e0bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f877578bb-gzfxs" podUID="4c958244-9bad-4fa8-9b01-c7784873e0bf" Jul 6 23:52:53.066182 containerd[1471]: time="2025-07-06T23:52:53.066138883Z" level=error msg="Failed to destroy network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.066743 containerd[1471]: time="2025-07-06T23:52:53.066703777Z" level=error msg="encountered an error cleaning up failed sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.066918 containerd[1471]: time="2025-07-06T23:52:53.066896143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssrs9,Uid:0f170527-172c-4d49-bd6a-2a7a489db328,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.067257 kubelet[2497]: E0706 23:52:53.067215 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.067582 kubelet[2497]: E0706 23:52:53.067555 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ssrs9" Jul 6 23:52:53.067718 kubelet[2497]: E0706 23:52:53.067700 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ssrs9" Jul 6 23:52:53.067890 kubelet[2497]: E0706 23:52:53.067811 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ssrs9_kube-system(0f170527-172c-4d49-bd6a-2a7a489db328)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ssrs9_kube-system(0f170527-172c-4d49-bd6a-2a7a489db328)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ssrs9" podUID="0f170527-172c-4d49-bd6a-2a7a489db328" Jul 6 23:52:53.075604 containerd[1471]: time="2025-07-06T23:52:53.075559201Z" level=error msg="Failed to destroy network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.076092 containerd[1471]: time="2025-07-06T23:52:53.076061041Z" level=error msg="encountered an error cleaning up failed sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.076238 containerd[1471]: time="2025-07-06T23:52:53.076215968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8fd5987-h2chv,Uid:e98209af-ab48-432a-83ec-cef900e23c8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.076654 kubelet[2497]: E0706 23:52:53.076612 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.076882 kubelet[2497]: E0706 23:52:53.076851 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" Jul 6 23:52:53.077000 kubelet[2497]: E0706 23:52:53.076985 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" Jul 6 23:52:53.077162 kubelet[2497]: E0706 23:52:53.077137 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c8fd5987-h2chv_calico-system(e98209af-ab48-432a-83ec-cef900e23c8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7c8fd5987-h2chv_calico-system(e98209af-ab48-432a-83ec-cef900e23c8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" podUID="e98209af-ab48-432a-83ec-cef900e23c8c" Jul 6 23:52:53.090514 containerd[1471]: time="2025-07-06T23:52:53.090432848Z" level=error msg="Failed to destroy network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.091050 containerd[1471]: time="2025-07-06T23:52:53.090883607Z" level=error msg="encountered an error cleaning up failed sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.091050 containerd[1471]: time="2025-07-06T23:52:53.090994071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7nwsf,Uid:54c845ff-89ff-445b-9c32-19dae23f02f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.091717 kubelet[2497]: E0706 23:52:53.091238 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.091717 kubelet[2497]: E0706 23:52:53.091298 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7nwsf" Jul 6 23:52:53.091717 kubelet[2497]: E0706 23:52:53.091320 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7nwsf" Jul 6 23:52:53.091834 kubelet[2497]: E0706 23:52:53.091375 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-768f4c5c69-7nwsf_calico-system(54c845ff-89ff-445b-9c32-19dae23f02f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-7nwsf_calico-system(54c845ff-89ff-445b-9c32-19dae23f02f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-7nwsf" podUID="54c845ff-89ff-445b-9c32-19dae23f02f5" Jul 6 23:52:53.094066 containerd[1471]: time="2025-07-06T23:52:53.093920639Z" level=error msg="Failed to destroy network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.094881 containerd[1471]: time="2025-07-06T23:52:53.094833897Z" level=error msg="encountered an error cleaning up failed sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.095044 containerd[1471]: time="2025-07-06T23:52:53.094914485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-x5v6z,Uid:a024894b-79af-4474-8f3a-f963becd00ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.095388 kubelet[2497]: E0706 23:52:53.095341 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.095466 kubelet[2497]: E0706 23:52:53.095411 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" Jul 6 23:52:53.095466 kubelet[2497]: E0706 23:52:53.095434 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" Jul 6 23:52:53.095528 kubelet[2497]: E0706 23:52:53.095478 2497 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5bcb5877-x5v6z_calico-apiserver(a024894b-79af-4474-8f3a-f963becd00ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5bcb5877-x5v6z_calico-apiserver(a024894b-79af-4474-8f3a-f963becd00ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" podUID="a024894b-79af-4474-8f3a-f963becd00ab" Jul 6 23:52:53.661913 systemd[1]: Created slice kubepods-besteffort-pod4d7dcac2_ec06_4a54_afc7_632e8abadb5b.slice - libcontainer container kubepods-besteffort-pod4d7dcac2_ec06_4a54_afc7_632e8abadb5b.slice. Jul 6 23:52:53.664999 containerd[1471]: time="2025-07-06T23:52:53.664891135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r52x4,Uid:4d7dcac2-ec06-4a54-afc7-632e8abadb5b,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:53.768834 containerd[1471]: time="2025-07-06T23:52:53.763829445Z" level=error msg="Failed to destroy network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.768834 containerd[1471]: time="2025-07-06T23:52:53.766381230Z" level=error msg="encountered an error cleaning up failed sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.768834 containerd[1471]: time="2025-07-06T23:52:53.766497455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r52x4,Uid:4d7dcac2-ec06-4a54-afc7-632e8abadb5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.769231 kubelet[2497]: E0706 23:52:53.766790 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:53.769231 kubelet[2497]: E0706 23:52:53.766874 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:53.769231 
kubelet[2497]: E0706 23:52:53.766908 2497 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r52x4" Jul 6 23:52:53.769614 kubelet[2497]: E0706 23:52:53.766987 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r52x4_calico-system(4d7dcac2-ec06-4a54-afc7-632e8abadb5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r52x4_calico-system(4d7dcac2-ec06-4a54-afc7-632e8abadb5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:53.770103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1-shm.mount: Deactivated successfully. Jul 6 23:52:53.826073 kubelet[2497]: I0706 23:52:53.824329 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:52:53.834500 kubelet[2497]: I0706 23:52:53.832806 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:52:53.836040 containerd[1471]: time="2025-07-06T23:52:53.835978751Z" level=info msg="StopPodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\"" Jul 6 23:52:53.836695 containerd[1471]: time="2025-07-06T23:52:53.836551337Z" level=info msg="StopPodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\"" Jul 6 23:52:53.837401 containerd[1471]: time="2025-07-06T23:52:53.837267361Z" level=info msg="Ensure that sandbox e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537 in task-service has been cleanup successfully" Jul 6 23:52:53.838221 containerd[1471]: time="2025-07-06T23:52:53.837949735Z" level=info msg="Ensure that sandbox 2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509 in task-service has been cleanup successfully" Jul 6 23:52:53.847809 kubelet[2497]: I0706 23:52:53.846022 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:52:53.848691 containerd[1471]: time="2025-07-06T23:52:53.848571170Z" level=info msg="StopPodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\"" Jul 6 23:52:53.849429 containerd[1471]: time="2025-07-06T23:52:53.849330347Z" level=info msg="Ensure that sandbox 91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769 in task-service has been cleanup successfully" Jul 6 23:52:53.854104 kubelet[2497]: I0706 23:52:53.852599 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:52:53.855249 
containerd[1471]: time="2025-07-06T23:52:53.854891830Z" level=info msg="StopPodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\"" Jul 6 23:52:53.858488 containerd[1471]: time="2025-07-06T23:52:53.857760950Z" level=info msg="Ensure that sandbox 3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1 in task-service has been cleanup successfully" Jul 6 23:52:53.879229 kubelet[2497]: I0706 23:52:53.876891 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:52:53.881221 containerd[1471]: time="2025-07-06T23:52:53.881182794Z" level=info msg="StopPodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\"" Jul 6 23:52:53.885699 containerd[1471]: time="2025-07-06T23:52:53.885662900Z" level=info msg="Ensure that sandbox d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f in task-service has been cleanup successfully" Jul 6 23:52:53.887141 kubelet[2497]: I0706 23:52:53.887111 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:52:53.890875 containerd[1471]: time="2025-07-06T23:52:53.889906844Z" level=info msg="StopPodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\"" Jul 6 23:52:53.897050 kubelet[2497]: I0706 23:52:53.897011 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:52:53.902025 containerd[1471]: time="2025-07-06T23:52:53.901170919Z" level=info msg="Ensure that sandbox cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2 in task-service has been cleanup successfully" Jul 6 23:52:53.907630 containerd[1471]: time="2025-07-06T23:52:53.907045674Z" level=info msg="StopPodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\"" Jul 6 23:52:53.907630 containerd[1471]: time="2025-07-06T23:52:53.907285729Z" level=info msg="Ensure that sandbox 900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9 in task-service has been cleanup successfully" Jul 6 23:52:53.915849 kubelet[2497]: I0706 23:52:53.915364 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:52:53.918919 containerd[1471]: time="2025-07-06T23:52:53.918692181Z" level=info msg="StopPodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\"" Jul 6 23:52:53.919498 containerd[1471]: time="2025-07-06T23:52:53.919463065Z" level=info msg="Ensure that sandbox e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3 in task-service has been cleanup successfully" Jul 6 23:52:54.028085 containerd[1471]: time="2025-07-06T23:52:54.028025361Z" level=error msg="StopPodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" failed" error="failed to destroy network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.028863 kubelet[2497]: E0706 23:52:54.028334 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:52:54.028863 kubelet[2497]: E0706 23:52:54.028417 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509"} Jul 6 23:52:54.028863 kubelet[2497]: E0706 23:52:54.028520 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f170527-172c-4d49-bd6a-2a7a489db328\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.028863 kubelet[2497]: E0706 23:52:54.028553 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f170527-172c-4d49-bd6a-2a7a489db328\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ssrs9" podUID="0f170527-172c-4d49-bd6a-2a7a489db328" Jul 6 23:52:54.041277 containerd[1471]: time="2025-07-06T23:52:54.041211500Z" level=error msg="StopPodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" failed" error="failed to destroy network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.042057 kubelet[2497]: E0706 23:52:54.041795 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:52:54.042057 kubelet[2497]: E0706 23:52:54.041868 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769"} Jul 6 23:52:54.042057 kubelet[2497]: E0706 23:52:54.041922 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a024894b-79af-4474-8f3a-f963becd00ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 6 23:52:54.042057 kubelet[2497]: E0706 23:52:54.041981 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a024894b-79af-4474-8f3a-f963becd00ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" podUID="a024894b-79af-4474-8f3a-f963becd00ab" Jul 6 23:52:54.046095 containerd[1471]: time="2025-07-06T23:52:54.046038480Z" level=error msg="StopPodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" failed" error="failed to destroy network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.046541 kubelet[2497]: E0706 23:52:54.046363 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:52:54.046541 kubelet[2497]: E0706 23:52:54.046412 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537"} Jul 6 23:52:54.046541 kubelet[2497]: E0706 23:52:54.046464 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c958244-9bad-4fa8-9b01-c7784873e0bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.046541 kubelet[2497]: E0706 23:52:54.046491 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c958244-9bad-4fa8-9b01-c7784873e0bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f877578bb-gzfxs" podUID="4c958244-9bad-4fa8-9b01-c7784873e0bf" Jul 6 23:52:54.081099 containerd[1471]: time="2025-07-06T23:52:54.081030180Z" level=error msg="StopPodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" failed" error="failed to destroy network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.082130 kubelet[2497]: E0706 23:52:54.081827 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:52:54.082130 kubelet[2497]: E0706 23:52:54.081907 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f"} Jul 6 23:52:54.082130 kubelet[2497]: E0706 23:52:54.081993 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e98209af-ab48-432a-83ec-cef900e23c8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.082130 kubelet[2497]: E0706 23:52:54.082058 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e98209af-ab48-432a-83ec-cef900e23c8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" podUID="e98209af-ab48-432a-83ec-cef900e23c8c" Jul 6 23:52:54.086790 containerd[1471]: time="2025-07-06T23:52:54.086730921Z" level=error msg="StopPodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" failed" error="failed to destroy network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.088673 kubelet[2497]: E0706 23:52:54.088590 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:52:54.089266 kubelet[2497]: E0706 23:52:54.089098 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1"} Jul 6 23:52:54.089266 kubelet[2497]: E0706 23:52:54.089176 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.089266 kubelet[2497]: E0706 23:52:54.089216 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d7dcac2-ec06-4a54-afc7-632e8abadb5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r52x4" podUID="4d7dcac2-ec06-4a54-afc7-632e8abadb5b" Jul 6 23:52:54.092741 containerd[1471]: time="2025-07-06T23:52:54.092626473Z" level=error msg="StopPodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" failed" error="failed to destroy network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.093741 kubelet[2497]: E0706 23:52:54.093689 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:52:54.094236 kubelet[2497]: E0706 23:52:54.094193 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9"} Jul 6 23:52:54.094611 kubelet[2497]: E0706 23:52:54.094567 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.095241 kubelet[2497]: E0706 23:52:54.094810 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5fvn5" podUID="0e94334e-f3fd-4a19-bf8e-6d83c5d49a81" Jul 6 23:52:54.097175 containerd[1471]: time="2025-07-06T23:52:54.097114960Z" level=error msg="StopPodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" failed" 
error="failed to destroy network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.098108 kubelet[2497]: E0706 23:52:54.097397 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:52:54.098108 kubelet[2497]: E0706 23:52:54.097454 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2"} Jul 6 23:52:54.098108 kubelet[2497]: E0706 23:52:54.097499 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9370f950-40d0-4ae3-b3c3-c5c05feb1803\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.098108 kubelet[2497]: E0706 23:52:54.097533 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9370f950-40d0-4ae3-b3c3-c5c05feb1803\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" podUID="9370f950-40d0-4ae3-b3c3-c5c05feb1803" Jul 6 23:52:54.099562 containerd[1471]: time="2025-07-06T23:52:54.099511076Z" level=error msg="StopPodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" failed" error="failed to destroy network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:54.100118 kubelet[2497]: E0706 23:52:54.100073 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:52:54.100623 kubelet[2497]: E0706 23:52:54.100489 2497 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3"} Jul 6 23:52:54.100623 kubelet[2497]: E0706 
23:52:54.100546 2497 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54c845ff-89ff-445b-9c32-19dae23f02f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:54.100623 kubelet[2497]: E0706 23:52:54.100579 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54c845ff-89ff-445b-9c32-19dae23f02f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-7nwsf" podUID="54c845ff-89ff-445b-9c32-19dae23f02f5" Jul 6 23:52:56.119410 kubelet[2497]: I0706 23:52:56.119366 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:52:56.152347 kubelet[2497]: E0706 23:52:56.152170 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:56.925069 kubelet[2497]: E0706 23:52:56.925018 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:59.048772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745882856.mount: Deactivated successfully. 
Jul 6 23:52:59.202936 containerd[1471]: time="2025-07-06T23:52:59.189396557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:52:59.209945 containerd[1471]: time="2025-07-06T23:52:59.209883467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:59.242214 containerd[1471]: time="2025-07-06T23:52:59.242126140Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:59.253075 containerd[1471]: time="2025-07-06T23:52:59.253005684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:59.257292 containerd[1471]: time="2025-07-06T23:52:59.257196094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.430565267s" Jul 6 23:52:59.257292 containerd[1471]: time="2025-07-06T23:52:59.257258508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:52:59.305807 containerd[1471]: time="2025-07-06T23:52:59.305661055Z" level=info msg="CreateContainer within sandbox \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:52:59.433524 containerd[1471]: time="2025-07-06T23:52:59.433366979Z" level=info msg="CreateContainer within sandbox \"4e3719762fd4050c8923731ddccfde429ad3167912cd68bc92f50adf131a11e0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"86b3e9afb8678d24e71cad33249e18d8fd77a6d49a709db49fe60b16566f5896\"" Jul 6 23:52:59.437793 containerd[1471]: time="2025-07-06T23:52:59.437400155Z" level=info msg="StartContainer for \"86b3e9afb8678d24e71cad33249e18d8fd77a6d49a709db49fe60b16566f5896\"" Jul 6 23:52:59.570842 systemd[1]: Started cri-containerd-86b3e9afb8678d24e71cad33249e18d8fd77a6d49a709db49fe60b16566f5896.scope - libcontainer container 86b3e9afb8678d24e71cad33249e18d8fd77a6d49a709db49fe60b16566f5896. Jul 6 23:52:59.644511 containerd[1471]: time="2025-07-06T23:52:59.644345788Z" level=info msg="StartContainer for \"86b3e9afb8678d24e71cad33249e18d8fd77a6d49a709db49fe60b16566f5896\" returns successfully" Jul 6 23:52:59.808585 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:52:59.808762 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 6 23:53:00.051827 kubelet[2497]: I0706 23:53:00.030301 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nxx25" podStartSLOduration=1.480346594 podStartE2EDuration="17.997875825s" podCreationTimestamp="2025-07-06 23:52:42 +0000 UTC" firstStartedPulling="2025-07-06 23:52:42.746787205 +0000 UTC m=+23.247170420" lastFinishedPulling="2025-07-06 23:52:59.264316423 +0000 UTC m=+39.764699651" observedRunningTime="2025-07-06 23:52:59.996587277 +0000 UTC m=+40.496970509" watchObservedRunningTime="2025-07-06 23:52:59.997875825 +0000 UTC m=+40.498259062" Jul 6 23:53:00.132620 containerd[1471]: time="2025-07-06T23:53:00.132378182Z" level=info msg="StopPodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\"" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.251 [INFO][3764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.252 [INFO][3764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" iface="eth0" netns="/var/run/netns/cni-cb12919f-a480-5ee6-28e7-8508a41c7dc5" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.254 [INFO][3764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" iface="eth0" netns="/var/run/netns/cni-cb12919f-a480-5ee6-28e7-8508a41c7dc5" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.254 [INFO][3764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" iface="eth0" netns="/var/run/netns/cni-cb12919f-a480-5ee6-28e7-8508a41c7dc5" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.255 [INFO][3764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.255 [INFO][3764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.431 [INFO][3779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.435 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.435 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.446 [WARNING][3779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.446 [INFO][3779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.448 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:00.457725 containerd[1471]: 2025-07-06 23:53:00.451 [INFO][3764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:00.462210 containerd[1471]: time="2025-07-06T23:53:00.459316545Z" level=info msg="TearDown network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" successfully" Jul 6 23:53:00.462210 containerd[1471]: time="2025-07-06T23:53:00.460082182Z" level=info msg="StopPodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" returns successfully" Jul 6 23:53:00.462649 systemd[1]: run-netns-cni\x2dcb12919f\x2da480\x2d5ee6\x2d28e7\x2d8508a41c7dc5.mount: Deactivated successfully. Jul 6 23:53:00.637504 kubelet[2497]: I0706 23:53:00.637443 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-ca-bundle\") pod \"4c958244-9bad-4fa8-9b01-c7784873e0bf\" (UID: \"4c958244-9bad-4fa8-9b01-c7784873e0bf\") " Jul 6 23:53:00.637504 kubelet[2497]: I0706 23:53:00.637513 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp78l\" (UniqueName: \"kubernetes.io/projected/4c958244-9bad-4fa8-9b01-c7784873e0bf-kube-api-access-fp78l\") pod \"4c958244-9bad-4fa8-9b01-c7784873e0bf\" (UID: \"4c958244-9bad-4fa8-9b01-c7784873e0bf\") " Jul 6 23:53:00.637716 kubelet[2497]: I0706 23:53:00.637538 2497 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-backend-key-pair\") pod \"4c958244-9bad-4fa8-9b01-c7784873e0bf\" (UID: \"4c958244-9bad-4fa8-9b01-c7784873e0bf\") " Jul 6 23:53:00.661869 kubelet[2497]: I0706 23:53:00.660078 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4c958244-9bad-4fa8-9b01-c7784873e0bf" (UID: "4c958244-9bad-4fa8-9b01-c7784873e0bf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:53:00.667215 kubelet[2497]: I0706 23:53:00.667127 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c958244-9bad-4fa8-9b01-c7784873e0bf-kube-api-access-fp78l" (OuterVolumeSpecName: "kube-api-access-fp78l") pod "4c958244-9bad-4fa8-9b01-c7784873e0bf" (UID: "4c958244-9bad-4fa8-9b01-c7784873e0bf"). InnerVolumeSpecName "kube-api-access-fp78l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:53:00.668269 kubelet[2497]: I0706 23:53:00.667187 2497 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4c958244-9bad-4fa8-9b01-c7784873e0bf" (UID: "4c958244-9bad-4fa8-9b01-c7784873e0bf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:53:00.667704 systemd[1]: var-lib-kubelet-pods-4c958244\x2d9bad\x2d4fa8\x2d9b01\x2dc7784873e0bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfp78l.mount: Deactivated successfully. Jul 6 23:53:00.667820 systemd[1]: var-lib-kubelet-pods-4c958244\x2d9bad\x2d4fa8\x2d9b01\x2dc7784873e0bf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:53:00.740745 kubelet[2497]: I0706 23:53:00.739873 2497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fp78l\" (UniqueName: \"kubernetes.io/projected/4c958244-9bad-4fa8-9b01-c7784873e0bf-kube-api-access-fp78l\") on node \"ci-4081.3.4-c-43d64a8ca6\" DevicePath \"\"" Jul 6 23:53:00.740745 kubelet[2497]: I0706 23:53:00.739956 2497 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-backend-key-pair\") on node \"ci-4081.3.4-c-43d64a8ca6\" DevicePath \"\"" Jul 6 23:53:00.740745 kubelet[2497]: I0706 23:53:00.740039 2497 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c958244-9bad-4fa8-9b01-c7784873e0bf-whisker-ca-bundle\") on node \"ci-4081.3.4-c-43d64a8ca6\" DevicePath \"\"" Jul 6 23:53:00.961339 systemd[1]: Removed slice kubepods-besteffort-pod4c958244_9bad_4fa8_9b01_c7784873e0bf.slice - libcontainer container kubepods-besteffort-pod4c958244_9bad_4fa8_9b01_c7784873e0bf.slice. Jul 6 23:53:01.103136 systemd[1]: Created slice kubepods-besteffort-pod85ece6a3_b701_4902_a23e_c58dd23821ec.slice - libcontainer container kubepods-besteffort-pod85ece6a3_b701_4902_a23e_c58dd23821ec.slice. 
Jul 6 23:53:01.244766 kubelet[2497]: I0706 23:53:01.244600 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85ece6a3-b701-4902-a23e-c58dd23821ec-whisker-backend-key-pair\") pod \"whisker-7d5b7b97c4-9fnsx\" (UID: \"85ece6a3-b701-4902-a23e-c58dd23821ec\") " pod="calico-system/whisker-7d5b7b97c4-9fnsx" Jul 6 23:53:01.244766 kubelet[2497]: I0706 23:53:01.244677 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62k52\" (UniqueName: \"kubernetes.io/projected/85ece6a3-b701-4902-a23e-c58dd23821ec-kube-api-access-62k52\") pod \"whisker-7d5b7b97c4-9fnsx\" (UID: \"85ece6a3-b701-4902-a23e-c58dd23821ec\") " pod="calico-system/whisker-7d5b7b97c4-9fnsx" Jul 6 23:53:01.244766 kubelet[2497]: I0706 23:53:01.244737 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ece6a3-b701-4902-a23e-c58dd23821ec-whisker-ca-bundle\") pod \"whisker-7d5b7b97c4-9fnsx\" (UID: \"85ece6a3-b701-4902-a23e-c58dd23821ec\") " pod="calico-system/whisker-7d5b7b97c4-9fnsx" Jul 6 23:53:01.407776 containerd[1471]: time="2025-07-06T23:53:01.407614648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d5b7b97c4-9fnsx,Uid:85ece6a3-b701-4902-a23e-c58dd23821ec,Namespace:calico-system,Attempt:0,}" Jul 6 23:53:01.587826 systemd-networkd[1371]: calia3fd0f809d0: Link UP Jul 6 23:53:01.588189 systemd-networkd[1371]: calia3fd0f809d0: Gained carrier Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.458 [INFO][3827] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.473 [INFO][3827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0 whisker-7d5b7b97c4- calico-system 85ece6a3-b701-4902-a23e-c58dd23821ec 918 0 2025-07-06 23:53:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d5b7b97c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 whisker-7d5b7b97c4-9fnsx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia3fd0f809d0 [] [] }} ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.473 [INFO][3827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.508 [INFO][3838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" HandleID="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.509 [INFO][3838] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" HandleID="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"whisker-7d5b7b97c4-9fnsx", "timestamp":"2025-07-06 23:53:01.50890447 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.509 [INFO][3838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.509 [INFO][3838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.509 [INFO][3838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.518 [INFO][3838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.531 [INFO][3838] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.537 [INFO][3838] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.541 [INFO][3838] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.545 [INFO][3838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.545 [INFO][3838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.547 [INFO][3838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933 Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.552 [INFO][3838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.560 [INFO][3838] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.65/26] block=192.168.120.64/26 handle="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.560 [INFO][3838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.65/26] handle="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.560 [INFO][3838] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:01.610883 containerd[1471]: 2025-07-06 23:53:01.560 [INFO][3838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.65/26] IPv6=[] ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" HandleID="k8s-pod-network.4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.615290 containerd[1471]: 2025-07-06 23:53:01.565 [INFO][3827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0", GenerateName:"whisker-7d5b7b97c4-", Namespace:"calico-system", SelfLink:"", UID:"85ece6a3-b701-4902-a23e-c58dd23821ec", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d5b7b97c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"whisker-7d5b7b97c4-9fnsx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3fd0f809d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:01.615290 containerd[1471]: 2025-07-06 23:53:01.565 [INFO][3827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.65/32] ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.615290 containerd[1471]: 2025-07-06 23:53:01.565 [INFO][3827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3fd0f809d0 ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.615290 containerd[1471]: 2025-07-06 23:53:01.589 [INFO][3827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.615290 containerd[1471]: 2025-07-06 23:53:01.589 [INFO][3827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0", GenerateName:"whisker-7d5b7b97c4-", Namespace:"calico-system", SelfLink:"", UID:"85ece6a3-b701-4902-a23e-c58dd23821ec", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d5b7b97c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933", Pod:"whisker-7d5b7b97c4-9fnsx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3fd0f809d0", MAC:"1a:45:84:d5:63:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:01.615290 containerd[1471]: 2025-07-06 23:53:01.605 [INFO][3827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933" Namespace="calico-system" Pod="whisker-7d5b7b97c4-9fnsx" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--7d5b7b97c4--9fnsx-eth0" Jul 6 23:53:01.655626 kubelet[2497]: I0706 23:53:01.655499 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c958244-9bad-4fa8-9b01-c7784873e0bf" path="/var/lib/kubelet/pods/4c958244-9bad-4fa8-9b01-c7784873e0bf/volumes" Jul 6 23:53:01.669340 containerd[1471]: time="2025-07-06T23:53:01.669168744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:01.669340 containerd[1471]: time="2025-07-06T23:53:01.669248985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:01.669340 containerd[1471]: time="2025-07-06T23:53:01.669264218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:01.669565 containerd[1471]: time="2025-07-06T23:53:01.669368543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:01.698219 systemd[1]: Started cri-containerd-4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933.scope - libcontainer container 4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933. 
Jul 6 23:53:01.816742 containerd[1471]: time="2025-07-06T23:53:01.816681820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d5b7b97c4-9fnsx,Uid:85ece6a3-b701-4902-a23e-c58dd23821ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933\"" Jul 6 23:53:01.825432 containerd[1471]: time="2025-07-06T23:53:01.822863234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:53:02.402032 kernel: bpftool[4035]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:53:02.758157 systemd-networkd[1371]: vxlan.calico: Link UP Jul 6 23:53:02.758164 systemd-networkd[1371]: vxlan.calico: Gained carrier Jul 6 23:53:03.193407 containerd[1471]: time="2025-07-06T23:53:03.193361802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:03.196405 containerd[1471]: time="2025-07-06T23:53:03.196349940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:53:03.198186 containerd[1471]: time="2025-07-06T23:53:03.197898633Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:03.206385 containerd[1471]: time="2025-07-06T23:53:03.206196236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:03.208551 containerd[1471]: time="2025-07-06T23:53:03.208103812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.385056445s" Jul 6 23:53:03.208551 containerd[1471]: time="2025-07-06T23:53:03.208149000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:53:03.212300 containerd[1471]: time="2025-07-06T23:53:03.211989974Z" level=info msg="CreateContainer within sandbox \"4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:53:03.224945 containerd[1471]: time="2025-07-06T23:53:03.224896549Z" level=info msg="CreateContainer within sandbox \"4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"db4106a41d2586790323b74e8e201ec09c199cc4fe2269c1cfd3bb9f749e69aa\"" Jul 6 23:53:03.226877 containerd[1471]: time="2025-07-06T23:53:03.226845896Z" level=info msg="StartContainer for \"db4106a41d2586790323b74e8e201ec09c199cc4fe2269c1cfd3bb9f749e69aa\"" Jul 6 23:53:03.272115 systemd[1]: run-containerd-runc-k8s.io-db4106a41d2586790323b74e8e201ec09c199cc4fe2269c1cfd3bb9f749e69aa-runc.zWENdq.mount: Deactivated successfully. Jul 6 23:53:03.281243 systemd[1]: Started cri-containerd-db4106a41d2586790323b74e8e201ec09c199cc4fe2269c1cfd3bb9f749e69aa.scope - libcontainer container db4106a41d2586790323b74e8e201ec09c199cc4fe2269c1cfd3bb9f749e69aa. 
Jul 6 23:53:03.334605 containerd[1471]: time="2025-07-06T23:53:03.334450172Z" level=info msg="StartContainer for \"db4106a41d2586790323b74e8e201ec09c199cc4fe2269c1cfd3bb9f749e69aa\" returns successfully" Jul 6 23:53:03.338271 containerd[1471]: time="2025-07-06T23:53:03.338180929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:53:03.400688 systemd-networkd[1371]: calia3fd0f809d0: Gained IPv6LL Jul 6 23:53:04.552871 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jul 6 23:53:04.662220 containerd[1471]: time="2025-07-06T23:53:04.661841461Z" level=info msg="StopPodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\"" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.774 [INFO][4157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.775 [INFO][4157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" iface="eth0" netns="/var/run/netns/cni-386fadeb-d30b-c6a3-e339-fdda81519cad" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.775 [INFO][4157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" iface="eth0" netns="/var/run/netns/cni-386fadeb-d30b-c6a3-e339-fdda81519cad" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.776 [INFO][4157] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" iface="eth0" netns="/var/run/netns/cni-386fadeb-d30b-c6a3-e339-fdda81519cad" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.776 [INFO][4157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.776 [INFO][4157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.810 [INFO][4165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.810 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.810 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.819 [WARNING][4165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.819 [INFO][4165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.821 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:04.831252 containerd[1471]: 2025-07-06 23:53:04.824 [INFO][4157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:04.839901 containerd[1471]: time="2025-07-06T23:53:04.832510870Z" level=info msg="TearDown network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" successfully" Jul 6 23:53:04.839901 containerd[1471]: time="2025-07-06T23:53:04.832550820Z" level=info msg="StopPodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" returns successfully" Jul 6 23:53:04.839901 containerd[1471]: time="2025-07-06T23:53:04.838551983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fvn5,Uid:0e94334e-f3fd-4a19-bf8e-6d83c5d49a81,Namespace:kube-system,Attempt:1,}" Jul 6 23:53:04.840229 kubelet[2497]: E0706 23:53:04.836093 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:04.840718 systemd[1]: run-netns-cni\x2d386fadeb\x2dd30b\x2dc6a3\x2de339\x2dfdda81519cad.mount: Deactivated successfully. 
Jul 6 23:53:05.035480 systemd-networkd[1371]: calif3d2abf1e8c: Link UP Jul 6 23:53:05.040248 systemd-networkd[1371]: calif3d2abf1e8c: Gained carrier Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.919 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0 coredns-668d6bf9bc- kube-system 0e94334e-f3fd-4a19-bf8e-6d83c5d49a81 934 0 2025-07-06 23:52:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 coredns-668d6bf9bc-5fvn5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3d2abf1e8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.919 [INFO][4176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.959 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" HandleID="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.959 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" HandleID="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"coredns-668d6bf9bc-5fvn5", "timestamp":"2025-07-06 23:53:04.95934958 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.959 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.959 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.959 [INFO][4188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.975 [INFO][4188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.983 [INFO][4188] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.990 [INFO][4188] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.993 [INFO][4188] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:04.999 [INFO][4188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.000 [INFO][4188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.002 [INFO][4188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58 Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.010 [INFO][4188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.020 [INFO][4188] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.66/26] block=192.168.120.64/26 handle="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.020 [INFO][4188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.66/26] handle="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.020 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:53:05.066159 containerd[1471]: 2025-07-06 23:53:05.020 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.66/26] IPv6=[] ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" HandleID="k8s-pod-network.111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.067432 containerd[1471]: 2025-07-06 23:53:05.023 [INFO][4176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"coredns-668d6bf9bc-5fvn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d2abf1e8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:05.067432 containerd[1471]: 2025-07-06 23:53:05.023 [INFO][4176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.66/32] ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.067432 containerd[1471]: 2025-07-06 23:53:05.023 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3d2abf1e8c ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.067432 containerd[1471]: 2025-07-06 23:53:05.042 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.067432 containerd[1471]: 2025-07-06 23:53:05.042 [INFO][4176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58", Pod:"coredns-668d6bf9bc-5fvn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d2abf1e8c", MAC:"52:71:cc:95:e9:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:05.067432 containerd[1471]: 2025-07-06 23:53:05.062 [INFO][4176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fvn5" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:05.119830 containerd[1471]: time="2025-07-06T23:53:05.119590597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:05.119830 containerd[1471]: time="2025-07-06T23:53:05.119686497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:05.123282 containerd[1471]: time="2025-07-06T23:53:05.123020655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:05.123543 containerd[1471]: time="2025-07-06T23:53:05.123456068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:05.158689 systemd[1]: Started cri-containerd-111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58.scope - libcontainer container 111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58. Jul 6 23:53:05.217899 containerd[1471]: time="2025-07-06T23:53:05.217476950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fvn5,Uid:0e94334e-f3fd-4a19-bf8e-6d83c5d49a81,Namespace:kube-system,Attempt:1,} returns sandbox id \"111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58\"" Jul 6 23:53:05.220734 kubelet[2497]: E0706 23:53:05.220676 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:05.225679 containerd[1471]: time="2025-07-06T23:53:05.225577154Z" level=info msg="CreateContainer within sandbox \"111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:53:05.257485 containerd[1471]: time="2025-07-06T23:53:05.257437160Z" level=info msg="CreateContainer within sandbox \"111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baaf78e4250e4eb5d0d607ea736c6ef07e69338673193d2ae708661f5b0ff534\"" Jul 6 23:53:05.258812 containerd[1471]: time="2025-07-06T23:53:05.258776331Z" level=info msg="StartContainer for \"baaf78e4250e4eb5d0d607ea736c6ef07e69338673193d2ae708661f5b0ff534\"" Jul 6 23:53:05.305355 systemd[1]: Started cri-containerd-baaf78e4250e4eb5d0d607ea736c6ef07e69338673193d2ae708661f5b0ff534.scope - libcontainer container baaf78e4250e4eb5d0d607ea736c6ef07e69338673193d2ae708661f5b0ff534. Jul 6 23:53:05.357339 containerd[1471]: time="2025-07-06T23:53:05.357301502Z" level=info msg="StartContainer for \"baaf78e4250e4eb5d0d607ea736c6ef07e69338673193d2ae708661f5b0ff534\" returns successfully" Jul 6 23:53:05.654877 containerd[1471]: time="2025-07-06T23:53:05.654822941Z" level=info msg="StopPodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\"" Jul 6 23:53:05.656820 containerd[1471]: time="2025-07-06T23:53:05.656732169Z" level=info msg="StopPodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\"" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.823 [INFO][4301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.823 [INFO][4301] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" iface="eth0" netns="/var/run/netns/cni-ed0e7f38-dc74-b7df-35da-16b2fea4e6bf" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.824 [INFO][4301] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" iface="eth0" netns="/var/run/netns/cni-ed0e7f38-dc74-b7df-35da-16b2fea4e6bf" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.830 [INFO][4301] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" iface="eth0" netns="/var/run/netns/cni-ed0e7f38-dc74-b7df-35da-16b2fea4e6bf" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.830 [INFO][4301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.830 [INFO][4301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.886 [INFO][4321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.886 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.886 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.899 [WARNING][4321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.899 [INFO][4321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.903 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:05.912415 containerd[1471]: 2025-07-06 23:53:05.905 [INFO][4301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:05.913614 containerd[1471]: time="2025-07-06T23:53:05.913099196Z" level=info msg="TearDown network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" successfully" Jul 6 23:53:05.913614 containerd[1471]: time="2025-07-06T23:53:05.913132795Z" level=info msg="StopPodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" returns successfully" Jul 6 23:53:05.918426 systemd[1]: run-netns-cni\x2ded0e7f38\x2ddc74\x2db7df\x2d35da\x2d16b2fea4e6bf.mount: Deactivated successfully. Jul 6 23:53:05.922526 containerd[1471]: time="2025-07-06T23:53:05.922308522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8fd5987-h2chv,Uid:e98209af-ab48-432a-83ec-cef900e23c8c,Namespace:calico-system,Attempt:1,}" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.821 [INFO][4302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.821 [INFO][4302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" iface="eth0" netns="/var/run/netns/cni-af246b2b-4475-f015-9463-9ba88042abf0" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.821 [INFO][4302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" iface="eth0" netns="/var/run/netns/cni-af246b2b-4475-f015-9463-9ba88042abf0" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.823 [INFO][4302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" iface="eth0" netns="/var/run/netns/cni-af246b2b-4475-f015-9463-9ba88042abf0" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.823 [INFO][4302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.823 [INFO][4302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.921 [INFO][4316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.921 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.921 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.930 [WARNING][4316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.931 [INFO][4316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.934 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:05.954166 containerd[1471]: 2025-07-06 23:53:05.943 [INFO][4302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:05.958325 containerd[1471]: time="2025-07-06T23:53:05.957911493Z" level=info msg="TearDown network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" successfully" Jul 6 23:53:05.958325 containerd[1471]: time="2025-07-06T23:53:05.958174052Z" level=info msg="StopPodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" returns successfully" Jul 6 23:53:05.960297 systemd[1]: run-netns-cni\x2daf246b2b\x2d4475\x2df015\x2d9463\x2d9ba88042abf0.mount: Deactivated successfully. 
Jul 6 23:53:05.961837 containerd[1471]: time="2025-07-06T23:53:05.961571162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-jhw4p,Uid:9370f950-40d0-4ae3-b3c3-c5c05feb1803,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:53:05.975802 kubelet[2497]: E0706 23:53:05.975770 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:06.051019 kubelet[2497]: I0706 23:53:06.050204 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5fvn5" podStartSLOduration=41.050100609 podStartE2EDuration="41.050100609s" podCreationTimestamp="2025-07-06 23:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:53:06.005316254 +0000 UTC m=+46.505699491" watchObservedRunningTime="2025-07-06 23:53:06.050100609 +0000 UTC m=+46.550483840" Jul 6 23:53:06.124700 containerd[1471]: time="2025-07-06T23:53:06.124652178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:06.126947 containerd[1471]: time="2025-07-06T23:53:06.126883655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:53:06.128148 containerd[1471]: time="2025-07-06T23:53:06.128064734Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:06.138396 containerd[1471]: time="2025-07-06T23:53:06.138348508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:06.142114 containerd[1471]: time="2025-07-06T23:53:06.141627149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.803381677s" Jul 6 23:53:06.142114 containerd[1471]: time="2025-07-06T23:53:06.141770724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:53:06.149638 containerd[1471]: time="2025-07-06T23:53:06.149580480Z" level=info msg="CreateContainer within sandbox \"4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:53:06.152707 systemd-networkd[1371]: calif3d2abf1e8c: Gained IPv6LL Jul 6 23:53:06.175509 containerd[1471]: time="2025-07-06T23:53:06.175448806Z" level=info msg="CreateContainer within sandbox \"4e521124a41671633a9a42a2495975888a4fdbe385b5adf72f4d668053c2e933\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a3907c99abb780cf2806488f189d1a8bda0f7aa89bf21b69cd7dce39ad81326d\"" Jul 6 23:53:06.176404 containerd[1471]: time="2025-07-06T23:53:06.176238523Z" level=info msg="StartContainer 
for \"a3907c99abb780cf2806488f189d1a8bda0f7aa89bf21b69cd7dce39ad81326d\"" Jul 6 23:53:06.232721 systemd[1]: Started cri-containerd-a3907c99abb780cf2806488f189d1a8bda0f7aa89bf21b69cd7dce39ad81326d.scope - libcontainer container a3907c99abb780cf2806488f189d1a8bda0f7aa89bf21b69cd7dce39ad81326d. Jul 6 23:53:06.255296 systemd-networkd[1371]: cali1d2d67c7daa: Link UP Jul 6 23:53:06.258504 systemd-networkd[1371]: cali1d2d67c7daa: Gained carrier Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.056 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0 calico-kube-controllers-7c8fd5987- calico-system e98209af-ab48-432a-83ec-cef900e23c8c 949 0 2025-07-06 23:52:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c8fd5987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 calico-kube-controllers-7c8fd5987-h2chv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1d2d67c7daa [] [] }} ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.056 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.135 [INFO][4354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" HandleID="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.135 [INFO][4354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" HandleID="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"calico-kube-controllers-7c8fd5987-h2chv", "timestamp":"2025-07-06 23:53:06.13533658 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.136 [INFO][4354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.136 [INFO][4354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.136 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.158 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.179 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.195 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.203 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.210 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.210 [INFO][4354] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.214 [INFO][4354] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062 Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.224 [INFO][4354] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.240 [INFO][4354] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.67/26] block=192.168.120.64/26 handle="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.240 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.67/26] handle="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.240 [INFO][4354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:53:06.297267 containerd[1471]: 2025-07-06 23:53:06.240 [INFO][4354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.67/26] IPv6=[] ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" HandleID="k8s-pod-network.fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:06.299384 containerd[1471]: 2025-07-06 23:53:06.246 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0", GenerateName:"calico-kube-controllers-7c8fd5987-", Namespace:"calico-system", SelfLink:"", UID:"e98209af-ab48-432a-83ec-cef900e23c8c", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8fd5987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"calico-kube-controllers-7c8fd5987-h2chv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d2d67c7daa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:06.299384 containerd[1471]: 2025-07-06 23:53:06.246 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.67/32] ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:06.299384 containerd[1471]: 2025-07-06 23:53:06.246 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d2d67c7daa ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:06.299384 containerd[1471]: 2025-07-06 23:53:06.259 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 
23:53:06.299384 containerd[1471]: 2025-07-06 23:53:06.259 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0", GenerateName:"calico-kube-controllers-7c8fd5987-", Namespace:"calico-system", SelfLink:"", UID:"e98209af-ab48-432a-83ec-cef900e23c8c", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8fd5987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062", Pod:"calico-kube-controllers-7c8fd5987-h2chv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d2d67c7daa", MAC:"ca:ff:c0:28:aa:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:06.299384 containerd[1471]: 2025-07-06 23:53:06.287 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062" Namespace="calico-system" Pod="calico-kube-controllers-7c8fd5987-h2chv" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:06.374222 systemd-networkd[1371]: cali2f6424b5c2d: Link UP Jul 6 23:53:06.378108 containerd[1471]: time="2025-07-06T23:53:06.371743840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:06.378108 containerd[1471]: time="2025-07-06T23:53:06.377916244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:06.378396 containerd[1471]: time="2025-07-06T23:53:06.378335418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:06.380997 containerd[1471]: time="2025-07-06T23:53:06.380225084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:06.386138 systemd-networkd[1371]: cali2f6424b5c2d: Gained carrier Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.174 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0 calico-apiserver-6b5bcb5877- calico-apiserver 9370f950-40d0-4ae3-b3c3-c5c05feb1803 948 0 2025-07-06 23:52:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5bcb5877 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 calico-apiserver-6b5bcb5877-jhw4p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f6424b5c2d [] [] }} ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.176 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.281 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" HandleID="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.282 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" HandleID="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"calico-apiserver-6b5bcb5877-jhw4p", "timestamp":"2025-07-06 23:53:06.281141139 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.282 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.282 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.282 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.295 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.306 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.316 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.320 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.327 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.327 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.332 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.342 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.357 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.68/26] block=192.168.120.64/26 handle="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.357 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.68/26] handle="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.357 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:53:06.410471 containerd[1471]: 2025-07-06 23:53:06.357 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.68/26] IPv6=[] ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" HandleID="k8s-pod-network.28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.411454 containerd[1471]: 2025-07-06 23:53:06.364 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"9370f950-40d0-4ae3-b3c3-c5c05feb1803", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"calico-apiserver-6b5bcb5877-jhw4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f6424b5c2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:06.411454 containerd[1471]: 2025-07-06 23:53:06.367 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.68/32] ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.411454 containerd[1471]: 2025-07-06 23:53:06.367 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f6424b5c2d ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.411454 containerd[1471]: 2025-07-06 23:53:06.385 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.411454 containerd[1471]: 2025-07-06 23:53:06.385 [INFO][4340] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"9370f950-40d0-4ae3-b3c3-c5c05feb1803", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc", Pod:"calico-apiserver-6b5bcb5877-jhw4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f6424b5c2d", MAC:"66:88:d4:2f:3b:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:06.411454 containerd[1471]: 2025-07-06 23:53:06.406 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-jhw4p" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:06.428268 systemd[1]: Started cri-containerd-fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062.scope - libcontainer container fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062. Jul 6 23:53:06.456219 containerd[1471]: time="2025-07-06T23:53:06.455921590Z" level=info msg="StartContainer for \"a3907c99abb780cf2806488f189d1a8bda0f7aa89bf21b69cd7dce39ad81326d\" returns successfully" Jul 6 23:53:06.471077 containerd[1471]: time="2025-07-06T23:53:06.470872800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:06.471360 containerd[1471]: time="2025-07-06T23:53:06.471059047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:06.471360 containerd[1471]: time="2025-07-06T23:53:06.471089049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:06.472818 containerd[1471]: time="2025-07-06T23:53:06.472148823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:06.505190 systemd[1]: Started cri-containerd-28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc.scope - libcontainer container 28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc. Jul 6 23:53:06.555748 containerd[1471]: time="2025-07-06T23:53:06.555706424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8fd5987-h2chv,Uid:e98209af-ab48-432a-83ec-cef900e23c8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062\"" Jul 6 23:53:06.561454 containerd[1471]: time="2025-07-06T23:53:06.560197926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:53:06.601097 containerd[1471]: time="2025-07-06T23:53:06.601053415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-jhw4p,Uid:9370f950-40d0-4ae3-b3c3-c5c05feb1803,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc\"" Jul 6 23:53:06.653638 containerd[1471]: time="2025-07-06T23:53:06.653577552Z" level=info msg="StopPodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\"" Jul 6 23:53:06.654274 containerd[1471]: time="2025-07-06T23:53:06.654207976Z" level=info msg="StopPodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\"" Jul 6 23:53:06.656102 containerd[1471]: time="2025-07-06T23:53:06.655179158Z" level=info msg="StopPodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\"" Jul 6 23:53:06.839015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763079479.mount: Deactivated successfully. Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.770 [INFO][4543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.771 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" iface="eth0" netns="/var/run/netns/cni-a0164f2c-eb9a-e8cb-5361-4b914fa5cb8c" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.771 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" iface="eth0" netns="/var/run/netns/cni-a0164f2c-eb9a-e8cb-5361-4b914fa5cb8c" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.772 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" iface="eth0" netns="/var/run/netns/cni-a0164f2c-eb9a-e8cb-5361-4b914fa5cb8c" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.772 [INFO][4543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.772 [INFO][4543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.824 [INFO][4561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.824 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.825 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.842 [WARNING][4561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.842 [INFO][4561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.847 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:06.855850 containerd[1471]: 2025-07-06 23:53:06.853 [INFO][4543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:06.858749 containerd[1471]: time="2025-07-06T23:53:06.856001891Z" level=info msg="TearDown network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" successfully" Jul 6 23:53:06.858749 containerd[1471]: time="2025-07-06T23:53:06.856062673Z" level=info msg="StopPodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" returns successfully" Jul 6 23:53:06.858749 containerd[1471]: time="2025-07-06T23:53:06.858635787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r52x4,Uid:4d7dcac2-ec06-4a54-afc7-632e8abadb5b,Namespace:calico-system,Attempt:1,}" Jul 6 23:53:06.861432 systemd[1]: run-netns-cni\x2da0164f2c\x2deb9a\x2de8cb\x2d5361\x2d4b914fa5cb8c.mount: Deactivated successfully. Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.774 [INFO][4539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.775 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" iface="eth0" netns="/var/run/netns/cni-8b420b1b-7437-81dc-34a5-23951f92e092" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.777 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" iface="eth0" netns="/var/run/netns/cni-8b420b1b-7437-81dc-34a5-23951f92e092" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.780 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" iface="eth0" netns="/var/run/netns/cni-8b420b1b-7437-81dc-34a5-23951f92e092" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.781 [INFO][4539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.781 [INFO][4539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.863 [INFO][4567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.864 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.864 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.873 [WARNING][4567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.873 [INFO][4567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.878 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:06.893857 containerd[1471]: 2025-07-06 23:53:06.885 [INFO][4539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:06.897455 containerd[1471]: time="2025-07-06T23:53:06.897056863Z" level=info msg="TearDown network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" successfully" Jul 6 23:53:06.897455 containerd[1471]: time="2025-07-06T23:53:06.897105875Z" level=info msg="StopPodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" returns successfully" Jul 6 23:53:06.899477 systemd[1]: run-netns-cni\x2d8b420b1b\x2d7437\x2d81dc\x2d34a5\x2d23951f92e092.mount: Deactivated successfully. 
Jul 6 23:53:06.904138 containerd[1471]: time="2025-07-06T23:53:06.904034366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-x5v6z,Uid:a024894b-79af-4474-8f3a-f963becd00ab,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.780 [INFO][4538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.781 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" iface="eth0" netns="/var/run/netns/cni-62544d30-654f-10f4-508f-2fb17e834b3b" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.781 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" iface="eth0" netns="/var/run/netns/cni-62544d30-654f-10f4-508f-2fb17e834b3b" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.782 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" iface="eth0" netns="/var/run/netns/cni-62544d30-654f-10f4-508f-2fb17e834b3b" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.782 [INFO][4538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.782 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.865 [INFO][4569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.866 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.879 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.901 [WARNING][4569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.901 [INFO][4569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.910 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:06.924689 containerd[1471]: 2025-07-06 23:53:06.914 [INFO][4538] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:06.925715 containerd[1471]: time="2025-07-06T23:53:06.925025795Z" level=info msg="TearDown network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" successfully" Jul 6 23:53:06.925715 containerd[1471]: time="2025-07-06T23:53:06.925054526Z" level=info msg="StopPodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" returns successfully" Jul 6 23:53:06.926857 containerd[1471]: time="2025-07-06T23:53:06.926507574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7nwsf,Uid:54c845ff-89ff-445b-9c32-19dae23f02f5,Namespace:calico-system,Attempt:1,}" Jul 6 23:53:06.998930 kubelet[2497]: E0706 23:53:06.996804 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:07.135868 systemd-networkd[1371]: cali192042aa21c: Link UP Jul 6 23:53:07.137519 systemd-networkd[1371]: cali192042aa21c: Gained carrier Jul 6 23:53:07.163699 kubelet[2497]: I0706 23:53:07.163534 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d5b7b97c4-9fnsx" podStartSLOduration=1.841627993 podStartE2EDuration="6.163510263s" podCreationTimestamp="2025-07-06 23:53:01 +0000 UTC" firstStartedPulling="2025-07-06 23:53:01.821627661 +0000 UTC m=+42.322010877" lastFinishedPulling="2025-07-06 23:53:06.143509932 +0000 UTC m=+46.643893147" observedRunningTime="2025-07-06 23:53:07.023635396 +0000 UTC m=+47.524018656" watchObservedRunningTime="2025-07-06 23:53:07.163510263 +0000 UTC m=+47.663893547" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:06.984 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0 calico-apiserver-6b5bcb5877- calico-apiserver a024894b-79af-4474-8f3a-f963becd00ab 974 0 2025-07-06 23:52:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5bcb5877 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 calico-apiserver-6b5bcb5877-x5v6z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali192042aa21c [] [] }} ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:06.985 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.056 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" HandleID="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" 
Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.057 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" HandleID="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"calico-apiserver-6b5bcb5877-x5v6z", "timestamp":"2025-07-06 23:53:07.056217235 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.057 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.057 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.057 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.068 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.077 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.089 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.095 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.101 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.101 [INFO][4622] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.104 [INFO][4622] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9 Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.114 [INFO][4622] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.127 [INFO][4622] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.69/26] block=192.168.120.64/26 handle="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.127 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.120.69/26] handle="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.127 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:07.170994 containerd[1471]: 2025-07-06 23:53:07.127 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.69/26] IPv6=[] ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" HandleID="k8s-pod-network.455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.173277 containerd[1471]: 2025-07-06 23:53:07.130 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"a024894b-79af-4474-8f3a-f963becd00ab", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"calico-apiserver-6b5bcb5877-x5v6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali192042aa21c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.173277 containerd[1471]: 2025-07-06 23:53:07.131 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.69/32] ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.173277 containerd[1471]: 2025-07-06 23:53:07.131 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali192042aa21c ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.173277 containerd[1471]: 2025-07-06 23:53:07.138 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.173277 containerd[1471]: 2025-07-06 23:53:07.139 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"a024894b-79af-4474-8f3a-f963becd00ab", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9", Pod:"calico-apiserver-6b5bcb5877-x5v6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali192042aa21c", MAC:"9a:c4:e5:2c:7b:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.173277 containerd[1471]: 2025-07-06 23:53:07.164 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9" Namespace="calico-apiserver" Pod="calico-apiserver-6b5bcb5877-x5v6z" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:07.216787 containerd[1471]: time="2025-07-06T23:53:07.216029228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:07.217161 containerd[1471]: time="2025-07-06T23:53:07.216441595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:07.217833 containerd[1471]: time="2025-07-06T23:53:07.217281696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:07.217833 containerd[1471]: time="2025-07-06T23:53:07.217545227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:07.255268 systemd[1]: Started cri-containerd-455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9.scope - libcontainer container 455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9. Jul 6 23:53:07.271351 systemd-networkd[1371]: calid627b1c498a: Link UP Jul 6 23:53:07.279210 systemd-networkd[1371]: calid627b1c498a: Gained carrier Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:06.985 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0 csi-node-driver- calico-system 4d7dcac2-ec06-4a54-afc7-632e8abadb5b 973 0 2025-07-06 23:52:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 csi-node-driver-r52x4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid627b1c498a [] [] }} ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:06.985 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.113 [INFO][4621] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" HandleID="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.114 [INFO][4621] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" HandleID="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001236a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"csi-node-driver-r52x4", "timestamp":"2025-07-06 23:53:07.113392939 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.114 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.127 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.128 [INFO][4621] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.170 [INFO][4621] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.183 [INFO][4621] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.195 [INFO][4621] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.200 [INFO][4621] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.211 [INFO][4621] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.211 [INFO][4621] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.217 [INFO][4621] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97 Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.228 [INFO][4621] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.238 [INFO][4621] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.70/26] block=192.168.120.64/26 handle="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.238 [INFO][4621] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.70/26] handle="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.239 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
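[Annotation] The [4621] sequence above is Calico's block-affinity IPAM end to end: this host, ci-4081.3.4-c-43d64a8ca6, holds an affinity for the /26 block 192.168.120.64/26, so every pod scheduled here draws the next free address from that block (.69 went to the apiserver pod, .70 to csi-node-driver here, with .71 and .72 claimed further down; which addresses below .69 were already taken is inferred, not shown in this stretch). A minimal sketch of the block lookup and next-free scan, assuming nothing about Calico's real datastore (the actual implementation lives in projectcalico's libcalico-go and additionally handles block claiming, handles, and contention):

package main

import (
	"fmt"
	"net/netip"
)

// blockFor masks an address down to its /26 IPAM block, mirroring the
// "Trying affinity for 192.168.120.64/26" step in the log above.
func blockFor(ip netip.Addr) netip.Prefix {
	p, _ := ip.Prefix(26)
	return p
}

// nextFree scans the block for the first unassigned address, roughly the
// "Attempting to assign 1 addresses from block" step.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	used := map[netip.Addr]bool{}
	for _, s := range []string{ // .64-.70 taken once csi-node-driver has its address
		"192.168.120.64", "192.168.120.65", "192.168.120.66", "192.168.120.67",
		"192.168.120.68", "192.168.120.69", "192.168.120.70",
	} {
		used[netip.MustParseAddr(s)] = true
	}
	block := blockFor(netip.MustParseAddr("192.168.120.70"))
	next, _ := nextFree(block, used)
	fmt.Println(block, "->", next) // 192.168.120.64/26 -> 192.168.120.71, matching goldmane below
}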
Jul 6 23:53:07.324375 containerd[1471]: 2025-07-06 23:53:07.240 [INFO][4621] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.70/26] IPv6=[] ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" HandleID="k8s-pod-network.f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.326552 containerd[1471]: 2025-07-06 23:53:07.245 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d7dcac2-ec06-4a54-afc7-632e8abadb5b", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"csi-node-driver-r52x4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid627b1c498a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.326552 containerd[1471]: 2025-07-06 23:53:07.246 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.70/32] ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.326552 containerd[1471]: 2025-07-06 23:53:07.246 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid627b1c498a ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.326552 containerd[1471]: 2025-07-06 23:53:07.283 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.326552 containerd[1471]: 2025-07-06 23:53:07.290 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d7dcac2-ec06-4a54-afc7-632e8abadb5b", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97", Pod:"csi-node-driver-r52x4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid627b1c498a", MAC:"f2:9b:b4:bb:31:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.326552 containerd[1471]: 2025-07-06 23:53:07.316 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97" Namespace="calico-system" Pod="csi-node-driver-r52x4" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:07.381043 containerd[1471]: time="2025-07-06T23:53:07.377869169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:07.381043 containerd[1471]: time="2025-07-06T23:53:07.377940332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:07.381043 containerd[1471]: time="2025-07-06T23:53:07.377952218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:07.381043 containerd[1471]: time="2025-07-06T23:53:07.378071767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:07.383860 systemd-networkd[1371]: calicf3e090183b: Link UP Jul 6 23:53:07.388033 systemd-networkd[1371]: calicf3e090183b: Gained carrier Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.087 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0 goldmane-768f4c5c69- calico-system 54c845ff-89ff-445b-9c32-19dae23f02f5 975 0 2025-07-06 23:52:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 goldmane-768f4c5c69-7nwsf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicf3e090183b [] [] }} ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.091 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.175 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" HandleID="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.176 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" HandleID="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5c10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"goldmane-768f4c5c69-7nwsf", "timestamp":"2025-07-06 23:53:07.175641967 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.176 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.239 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
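[Annotation] Worth noticing before the goldmane assignment proceeds: the host-wide IPAM lock strictly serializes concurrent CNI ADDs on a node. The goldmane request ([4637]) logged "About to acquire host-wide IPAM lock" at 23:53:07.176 but only acquired it at 23:53:07.239, the instant the csi-node-driver request ([4621]) released it above. A quick check of the queueing delay from the printed millisecond-precision timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the [4637] ipam_plugin.go entries above.
	about, _ := time.Parse("15:04:05.000", "23:53:07.176")
	acquired, _ := time.Parse("15:04:05.000", "23:53:07.239")
	fmt.Println(acquired.Sub(about)) // 63ms queued behind the [4621] request
}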
Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.239 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.282 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.305 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.313 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.322 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.327 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.327 [INFO][4637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.335 [INFO][4637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323 Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.341 [INFO][4637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.354 [INFO][4637] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.71/26] block=192.168.120.64/26 handle="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.357 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.71/26] handle="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.357 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:53:07.417398 containerd[1471]: 2025-07-06 23:53:07.357 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.71/26] IPv6=[] ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" HandleID="k8s-pod-network.c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.418889 containerd[1471]: 2025-07-06 23:53:07.362 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"54c845ff-89ff-445b-9c32-19dae23f02f5", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"goldmane-768f4c5c69-7nwsf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf3e090183b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.418889 containerd[1471]: 2025-07-06 23:53:07.362 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.71/32] ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.418889 containerd[1471]: 2025-07-06 23:53:07.362 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf3e090183b ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.418889 containerd[1471]: 2025-07-06 23:53:07.388 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.418889 containerd[1471]: 2025-07-06 23:53:07.388 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"54c845ff-89ff-445b-9c32-19dae23f02f5", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323", Pod:"goldmane-768f4c5c69-7nwsf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf3e090183b", MAC:"16:9e:03:a1:05:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.418889 containerd[1471]: 2025-07-06 23:53:07.408 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323" Namespace="calico-system" Pod="goldmane-768f4c5c69-7nwsf" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:07.436124 containerd[1471]: time="2025-07-06T23:53:07.435718594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5bcb5877-x5v6z,Uid:a024894b-79af-4474-8f3a-f963becd00ab,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9\"" Jul 6 23:53:07.454235 systemd[1]: Started cri-containerd-f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97.scope - libcontainer container f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97. Jul 6 23:53:07.475813 containerd[1471]: time="2025-07-06T23:53:07.475497131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:07.475813 containerd[1471]: time="2025-07-06T23:53:07.475572285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:07.475813 containerd[1471]: time="2025-07-06T23:53:07.475587173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:07.475813 containerd[1471]: time="2025-07-06T23:53:07.475712249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:07.515351 systemd[1]: Started cri-containerd-c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323.scope - libcontainer container c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323. Jul 6 23:53:07.535773 containerd[1471]: time="2025-07-06T23:53:07.535644676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r52x4,Uid:4d7dcac2-ec06-4a54-afc7-632e8abadb5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97\"" Jul 6 23:53:07.578154 containerd[1471]: time="2025-07-06T23:53:07.578034535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7nwsf,Uid:54c845ff-89ff-445b-9c32-19dae23f02f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323\"" Jul 6 23:53:07.654294 containerd[1471]: time="2025-07-06T23:53:07.653899654Z" level=info msg="StopPodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\"" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.732 [INFO][4806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.733 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" iface="eth0" netns="/var/run/netns/cni-4dbb0f85-31d8-dc69-e36e-47eb0022c792" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.733 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" iface="eth0" netns="/var/run/netns/cni-4dbb0f85-31d8-dc69-e36e-47eb0022c792" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.734 [INFO][4806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" iface="eth0" netns="/var/run/netns/cni-4dbb0f85-31d8-dc69-e36e-47eb0022c792" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.734 [INFO][4806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.734 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.767 [INFO][4813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.767 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.768 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.776 [WARNING][4813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.776 [INFO][4813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.778 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:07.783998 containerd[1471]: 2025-07-06 23:53:07.781 [INFO][4806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:07.784830 containerd[1471]: time="2025-07-06T23:53:07.784076217Z" level=info msg="TearDown network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" successfully" Jul 6 23:53:07.784830 containerd[1471]: time="2025-07-06T23:53:07.784103353Z" level=info msg="StopPodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" returns successfully" Jul 6 23:53:07.785570 kubelet[2497]: E0706 23:53:07.784620 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:07.786502 containerd[1471]: time="2025-07-06T23:53:07.785923388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssrs9,Uid:0f170527-172c-4d49-bd6a-2a7a489db328,Namespace:kube-system,Attempt:1,}" Jul 6 23:53:07.848250 systemd[1]: run-netns-cni\x2d4dbb0f85\x2d31d8\x2ddc69\x2de36e\x2d47eb0022c792.mount: Deactivated successfully. Jul 6 23:53:07.848417 systemd[1]: run-netns-cni\x2d62544d30\x2d654f\x2d10f4\x2d508f\x2d2fb17e834b3b.mount: Deactivated successfully. 
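[Annotation] The kubelet "Nameserver limits exceeded" events recurring through this stretch fire whenever the resolv.conf content destined for a pod carries more nameserver entries than the classic glibc limit of three, which kubelet enforces by truncating the list; note the applied line here even ends up with 67.207.67.3 listed twice. A minimal reproduction of the truncation rule (an assumed simplification; the fourth entry below is an invented placeholder, since the log never shows the pre-truncation list):

package main

import "fmt"

// maxNameservers mirrors the historical glibc MAXNS limit that kubelet enforces.
const maxNameservers = 3

func capNameservers(ns []string) (applied []string, exceeded bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// First three entries are the "applied nameserver line" from the log;
	// 192.0.2.1 stands in for whatever got omitted.
	ns := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "192.0.2.1"}
	applied, exceeded := capNameservers(ns)
	fmt.Println(applied, "limits exceeded:", exceeded)
}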
Jul 6 23:53:07.951614 systemd-networkd[1371]: cali1f1651c6098: Link UP Jul 6 23:53:07.953401 systemd-networkd[1371]: cali1f1651c6098: Gained carrier Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.842 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0 coredns-668d6bf9bc- kube-system 0f170527-172c-4d49-bd6a-2a7a489db328 998 0 2025-07-06 23:52:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-c-43d64a8ca6 coredns-668d6bf9bc-ssrs9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f1651c6098 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.843 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.887 [INFO][4832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" HandleID="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.887 [INFO][4832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" HandleID="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-c-43d64a8ca6", "pod":"coredns-668d6bf9bc-ssrs9", "timestamp":"2025-07-06 23:53:07.887300993 +0000 UTC"}, Hostname:"ci-4081.3.4-c-43d64a8ca6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.887 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.887 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.887 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-c-43d64a8ca6' Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.899 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.912 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.921 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.925 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.928 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.928 [INFO][4832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.931 [INFO][4832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.936 [INFO][4832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.944 [INFO][4832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.72/26] block=192.168.120.64/26 handle="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.944 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.72/26] handle="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" host="ci-4081.3.4-c-43d64a8ca6" Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.944 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:53:07.985340 containerd[1471]: 2025-07-06 23:53:07.944 [INFO][4832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.72/26] IPv6=[] ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" HandleID="k8s-pod-network.13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.986318 containerd[1471]: 2025-07-06 23:53:07.947 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f170527-172c-4d49-bd6a-2a7a489db328", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"", Pod:"coredns-668d6bf9bc-ssrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f1651c6098", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.986318 containerd[1471]: 2025-07-06 23:53:07.947 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.72/32] ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.986318 containerd[1471]: 2025-07-06 23:53:07.947 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f1651c6098 ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.986318 containerd[1471]: 2025-07-06 23:53:07.953 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:07.986318 containerd[1471]: 2025-07-06 23:53:07.954 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f170527-172c-4d49-bd6a-2a7a489db328", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb", Pod:"coredns-668d6bf9bc-ssrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f1651c6098", MAC:"92:13:b9:b5:47:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:07.986318 containerd[1471]: 2025-07-06 23:53:07.979 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssrs9" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:08.020076 kubelet[2497]: E0706 23:53:08.020020 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:08.034172 containerd[1471]: time="2025-07-06T23:53:08.034053660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:08.034172 containerd[1471]: time="2025-07-06T23:53:08.034131032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:08.034172 containerd[1471]: time="2025-07-06T23:53:08.034143636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:08.034666 containerd[1471]: time="2025-07-06T23:53:08.034235140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:08.075622 systemd[1]: Started cri-containerd-13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb.scope - libcontainer container 13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb. Jul 6 23:53:08.148404 containerd[1471]: time="2025-07-06T23:53:08.148234156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssrs9,Uid:0f170527-172c-4d49-bd6a-2a7a489db328,Namespace:kube-system,Attempt:1,} returns sandbox id \"13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb\"" Jul 6 23:53:08.149377 kubelet[2497]: E0706 23:53:08.149339 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:08.155793 containerd[1471]: time="2025-07-06T23:53:08.155656596Z" level=info msg="CreateContainer within sandbox \"13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:53:08.175128 containerd[1471]: time="2025-07-06T23:53:08.174084580Z" level=info msg="CreateContainer within sandbox \"13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"991f9d0e028c4d9fcdbe8b0f1f66f0e003525a94fa1f50b2ff798b946e09dba0\"" Jul 6 23:53:08.179984 containerd[1471]: time="2025-07-06T23:53:08.177119300Z" level=info msg="StartContainer for \"991f9d0e028c4d9fcdbe8b0f1f66f0e003525a94fa1f50b2ff798b946e09dba0\"" Jul 6 23:53:08.230188 systemd[1]: Started cri-containerd-991f9d0e028c4d9fcdbe8b0f1f66f0e003525a94fa1f50b2ff798b946e09dba0.scope - libcontainer container 991f9d0e028c4d9fcdbe8b0f1f66f0e003525a94fa1f50b2ff798b946e09dba0. 
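[Annotation] The coredns endpoint dump above is the first in this stretch to carry a Ports list, with the integers printed in hex by the struct dump; decoded, Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153, coredns's Prometheus metrics port:

package main

import "fmt"

func main() {
	// Hex port values from the coredns WorkloadEndpointPort dump above.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%-8s 0x%04x = %d\n", name, p, p) // 0x0035 = 53, 0x23c1 = 9153
	}
}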
Jul 6 23:53:08.273078 containerd[1471]: time="2025-07-06T23:53:08.272848620Z" level=info msg="StartContainer for \"991f9d0e028c4d9fcdbe8b0f1f66f0e003525a94fa1f50b2ff798b946e09dba0\" returns successfully" Jul 6 23:53:08.328157 systemd-networkd[1371]: cali1d2d67c7daa: Gained IPv6LL Jul 6 23:53:08.393142 systemd-networkd[1371]: cali2f6424b5c2d: Gained IPv6LL Jul 6 23:53:08.457230 systemd-networkd[1371]: calicf3e090183b: Gained IPv6LL Jul 6 23:53:09.027336 kubelet[2497]: E0706 23:53:09.027290 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:09.095690 kubelet[2497]: I0706 23:53:09.095294 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ssrs9" podStartSLOduration=44.094164492 podStartE2EDuration="44.094164492s" podCreationTimestamp="2025-07-06 23:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:53:09.071047957 +0000 UTC m=+49.571431191" watchObservedRunningTime="2025-07-06 23:53:09.094164492 +0000 UTC m=+49.594547726" Jul 6 23:53:09.096323 systemd-networkd[1371]: cali192042aa21c: Gained IPv6LL Jul 6 23:53:09.224123 systemd-networkd[1371]: calid627b1c498a: Gained IPv6LL Jul 6 23:53:09.736119 systemd-networkd[1371]: cali1f1651c6098: Gained IPv6LL Jul 6 23:53:09.971846 containerd[1471]: time="2025-07-06T23:53:09.971055951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:09.973065 containerd[1471]: time="2025-07-06T23:53:09.973005585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:53:09.975247 containerd[1471]: time="2025-07-06T23:53:09.975181230Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:09.978105 containerd[1471]: time="2025-07-06T23:53:09.977843971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:09.979316 containerd[1471]: time="2025-07-06T23:53:09.979258118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.419014645s" Jul 6 23:53:09.979316 containerd[1471]: time="2025-07-06T23:53:09.979303553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:53:09.982204 containerd[1471]: time="2025-07-06T23:53:09.981834832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:53:10.016214 containerd[1471]: time="2025-07-06T23:53:10.015641034Z" level=info msg="CreateContainer within sandbox \"fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:53:10.045550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257852963.mount: Deactivated successfully. Jul 6 23:53:10.050886 kubelet[2497]: E0706 23:53:10.050447 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:10.054251 containerd[1471]: time="2025-07-06T23:53:10.052849505Z" level=info msg="CreateContainer within sandbox \"fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049\"" Jul 6 23:53:10.054251 containerd[1471]: time="2025-07-06T23:53:10.053446273Z" level=info msg="StartContainer for \"06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049\"" Jul 6 23:53:10.123203 systemd[1]: Started cri-containerd-06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049.scope - libcontainer container 06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049. Jul 6 23:53:10.206388 containerd[1471]: time="2025-07-06T23:53:10.206246096Z" level=info msg="StartContainer for \"06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049\" returns successfully" Jul 6 23:53:11.055236 kubelet[2497]: E0706 23:53:11.054600 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:11.072855 kubelet[2497]: I0706 23:53:11.072188 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c8fd5987-h2chv" podStartSLOduration=25.649433052 podStartE2EDuration="29.072161548s" podCreationTimestamp="2025-07-06 23:52:42 +0000 UTC" firstStartedPulling="2025-07-06 23:53:06.558791128 +0000 UTC m=+47.059174343" lastFinishedPulling="2025-07-06 23:53:09.981519608 +0000 UTC m=+50.481902839" observedRunningTime="2025-07-06 23:53:11.070179896 +0000 UTC m=+51.570563132" watchObservedRunningTime="2025-07-06 23:53:11.072161548 +0000 UTC m=+51.572544778" Jul 6 23:53:11.989939 systemd[1]: Started sshd@8-209.38.68.255:22-139.178.89.65:54934.service - OpenSSH per-connection server daemon (139.178.89.65:54934). Jul 6 23:53:12.198579 systemd[1]: run-containerd-runc-k8s.io-06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049-runc.6eqIHg.mount: Deactivated successfully. Jul 6 23:53:12.222574 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 54934 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:12.228521 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:12.241755 systemd-logind[1445]: New session 8 of user core. Jul 6 23:53:12.247092 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:53:13.017248 sshd[4993]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:13.025497 systemd[1]: sshd@8-209.38.68.255:22-139.178.89.65:54934.service: Deactivated successfully. Jul 6 23:53:13.031935 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:53:13.037750 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:53:13.040636 systemd-logind[1445]: Removed session 8. 
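[Annotation] The pod_startup_latency_tracker entries deserve a decode: podStartE2EDuration is wall time from pod creation to observed running, and podStartSLOduration is that figure minus time spent pulling images (for coredns earlier, both pull timestamps are the zero value 0001-01-01, so the two durations coincide at 44.094 s). Checking the calico-kube-controllers numbers with the monotonic m=+ offsets; the field interpretation is assumed from the names, the tracker itself being kubelet's pod_startup_latency_tracker.go:

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) from the kube-controllers entry above.
	const (
		e2e       = 29.072161548 // podStartE2EDuration, in seconds
		pullStart = 47.059174343 // firstStartedPulling
		pullEnd   = 50.481902839 // lastFinishedPulling
	)
	slo := e2e - (pullEnd - pullStart) // end-to-end minus image-pull time
	fmt.Printf("%vs\n", slo)           // 25.649433052s, matching the reported podStartSLOduration
}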
Jul 6 23:53:13.585165 containerd[1471]: time="2025-07-06T23:53:13.584938691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:13.586467 containerd[1471]: time="2025-07-06T23:53:13.586419331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:53:13.587211 containerd[1471]: time="2025-07-06T23:53:13.587178840Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:13.589499 containerd[1471]: time="2025-07-06T23:53:13.589416341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:13.590443 containerd[1471]: time="2025-07-06T23:53:13.590315152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.608438171s" Jul 6 23:53:13.590443 containerd[1471]: time="2025-07-06T23:53:13.590352312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:53:13.591988 containerd[1471]: time="2025-07-06T23:53:13.591815338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:53:13.595877 containerd[1471]: time="2025-07-06T23:53:13.595761896Z" level=info msg="CreateContainer within sandbox \"28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:53:13.612661 containerd[1471]: time="2025-07-06T23:53:13.611878552Z" level=info msg="CreateContainer within sandbox \"28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ce8a39454dd24e763a86d4d6a68a85fd9380598705419a3ab145d56045a44253\"" Jul 6 23:53:13.615241 containerd[1471]: time="2025-07-06T23:53:13.615194236Z" level=info msg="StartContainer for \"ce8a39454dd24e763a86d4d6a68a85fd9380598705419a3ab145d56045a44253\"" Jul 6 23:53:13.616857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183934489.mount: Deactivated successfully. Jul 6 23:53:13.709376 systemd[1]: Started cri-containerd-ce8a39454dd24e763a86d4d6a68a85fd9380598705419a3ab145d56045a44253.scope - libcontainer container ce8a39454dd24e763a86d4d6a68a85fd9380598705419a3ab145d56045a44253. 
Jul 6 23:53:13.802426 containerd[1471]: time="2025-07-06T23:53:13.802380179Z" level=info msg="StartContainer for \"ce8a39454dd24e763a86d4d6a68a85fd9380598705419a3ab145d56045a44253\" returns successfully" Jul 6 23:53:13.978532 containerd[1471]: time="2025-07-06T23:53:13.978409364Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:13.979669 containerd[1471]: time="2025-07-06T23:53:13.979600517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:53:13.981856 containerd[1471]: time="2025-07-06T23:53:13.981811758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 389.851939ms" Jul 6 23:53:13.981856 containerd[1471]: time="2025-07-06T23:53:13.981855537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:53:13.985195 containerd[1471]: time="2025-07-06T23:53:13.984511777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:53:13.989466 containerd[1471]: time="2025-07-06T23:53:13.989417509Z" level=info msg="CreateContainer within sandbox \"455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:53:14.023215 containerd[1471]: time="2025-07-06T23:53:14.023165629Z" level=info msg="CreateContainer within sandbox \"455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7da983bdcbf912c5464a3bb86bc23e9fdcdb4ec0d48b8aa7587f8d27cf96f603\"" Jul 6 23:53:14.026036 containerd[1471]: time="2025-07-06T23:53:14.025240021Z" level=info msg="StartContainer for \"7da983bdcbf912c5464a3bb86bc23e9fdcdb4ec0d48b8aa7587f8d27cf96f603\"" Jul 6 23:53:14.133188 systemd[1]: Started cri-containerd-7da983bdcbf912c5464a3bb86bc23e9fdcdb4ec0d48b8aa7587f8d27cf96f603.scope - libcontainer container 7da983bdcbf912c5464a3bb86bc23e9fdcdb4ec0d48b8aa7587f8d27cf96f603. 
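[Annotation] Two pulls of the same apiserver image bracket this stretch: the cold pull finished at 23:53:13.590 after reading 47,317,977 bytes in 3.608 s, while the second, finishing at 23:53:13.981, read only 77 bytes in roughly 390 ms, because every blob was already in the content store and containerd merely re-resolved the manifest (hence the ImageUpdate event rather than ImageCreate). Back-of-envelope transfer rate for the cold pull:

package main

import "fmt"

func main() {
	const bytesRead = 47317977  // "bytes read" for the cold apiserver pull
	const seconds = 3.608438171 // reported pull duration
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~12.5 MiB/s from ghcr.io
}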
Jul 6 23:53:14.274653 containerd[1471]: time="2025-07-06T23:53:14.273798348Z" level=info msg="StartContainer for \"7da983bdcbf912c5464a3bb86bc23e9fdcdb4ec0d48b8aa7587f8d27cf96f603\" returns successfully" Jul 6 23:53:15.123431 kubelet[2497]: I0706 23:53:15.123383 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:53:15.148081 kubelet[2497]: I0706 23:53:15.148000 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b5bcb5877-jhw4p" podStartSLOduration=30.159644723 podStartE2EDuration="37.147955929s" podCreationTimestamp="2025-07-06 23:52:38 +0000 UTC" firstStartedPulling="2025-07-06 23:53:06.603261705 +0000 UTC m=+47.103644921" lastFinishedPulling="2025-07-06 23:53:13.5915729 +0000 UTC m=+54.091956127" observedRunningTime="2025-07-06 23:53:14.144699461 +0000 UTC m=+54.645082698" watchObservedRunningTime="2025-07-06 23:53:15.147955929 +0000 UTC m=+55.648339182" Jul 6 23:53:15.149502 kubelet[2497]: I0706 23:53:15.149440 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b5bcb5877-x5v6z" podStartSLOduration=30.609083566 podStartE2EDuration="37.149417856s" podCreationTimestamp="2025-07-06 23:52:38 +0000 UTC" firstStartedPulling="2025-07-06 23:53:07.443740727 +0000 UTC m=+47.944123946" lastFinishedPulling="2025-07-06 23:53:13.984075018 +0000 UTC m=+54.484458236" observedRunningTime="2025-07-06 23:53:15.14912615 +0000 UTC m=+55.649509389" watchObservedRunningTime="2025-07-06 23:53:15.149417856 +0000 UTC m=+55.649801093" Jul 6 23:53:15.589816 containerd[1471]: time="2025-07-06T23:53:15.589757424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:15.591200 containerd[1471]: time="2025-07-06T23:53:15.591143676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:53:15.591594 containerd[1471]: time="2025-07-06T23:53:15.591566305Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:15.609037 containerd[1471]: time="2025-07-06T23:53:15.607739426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:15.609037 containerd[1471]: time="2025-07-06T23:53:15.608819420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.62427042s" Jul 6 23:53:15.609037 containerd[1471]: time="2025-07-06T23:53:15.608866315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:53:15.610339 containerd[1471]: time="2025-07-06T23:53:15.610044275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:53:15.614256 containerd[1471]: time="2025-07-06T23:53:15.613810785Z" level=info msg="CreateContainer within sandbox \"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97\" 
for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:53:15.655138 containerd[1471]: time="2025-07-06T23:53:15.655090144Z" level=info msg="CreateContainer within sandbox \"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1322bd9952017248746b6506bb606831171fd9acefc2ac6e80cd1202e730df16\"" Jul 6 23:53:15.660469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927064659.mount: Deactivated successfully. Jul 6 23:53:15.663283 containerd[1471]: time="2025-07-06T23:53:15.658869943Z" level=info msg="StartContainer for \"1322bd9952017248746b6506bb606831171fd9acefc2ac6e80cd1202e730df16\"" Jul 6 23:53:15.744757 systemd[1]: Started cri-containerd-1322bd9952017248746b6506bb606831171fd9acefc2ac6e80cd1202e730df16.scope - libcontainer container 1322bd9952017248746b6506bb606831171fd9acefc2ac6e80cd1202e730df16. Jul 6 23:53:15.812256 containerd[1471]: time="2025-07-06T23:53:15.812205923Z" level=info msg="StartContainer for \"1322bd9952017248746b6506bb606831171fd9acefc2ac6e80cd1202e730df16\" returns successfully" Jul 6 23:53:16.151227 kubelet[2497]: I0706 23:53:16.151064 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:53:18.052896 systemd[1]: Started sshd@9-209.38.68.255:22-139.178.89.65:54950.service - OpenSSH per-connection server daemon (139.178.89.65:54950). Jul 6 23:53:18.223921 sshd[5167]: Accepted publickey for core from 139.178.89.65 port 54950 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:18.227688 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:18.236271 systemd-logind[1445]: New session 9 of user core. Jul 6 23:53:18.243285 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:53:19.030285 sshd[5167]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:19.037312 systemd[1]: sshd@9-209.38.68.255:22-139.178.89.65:54950.service: Deactivated successfully. Jul 6 23:53:19.041957 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:53:19.045845 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:53:19.050194 systemd-logind[1445]: Removed session 9. Jul 6 23:53:19.292499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651003854.mount: Deactivated successfully. 
Jul 6 23:53:19.936539 containerd[1471]: time="2025-07-06T23:53:19.935594206Z" level=info msg="StopPodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\"" Jul 6 23:53:20.199075 containerd[1471]: time="2025-07-06T23:53:20.198744210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:20.202278 containerd[1471]: time="2025-07-06T23:53:20.200983905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:53:20.204848 containerd[1471]: time="2025-07-06T23:53:20.204798052Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:20.210187 containerd[1471]: time="2025-07-06T23:53:20.210140008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:20.211236 containerd[1471]: time="2025-07-06T23:53:20.210740507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.600665332s" Jul 6 23:53:20.211236 containerd[1471]: time="2025-07-06T23:53:20.210785336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:53:20.268890 containerd[1471]: time="2025-07-06T23:53:20.268559124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:53:20.431567 containerd[1471]: time="2025-07-06T23:53:20.431463137Z" level=info msg="CreateContainer within sandbox \"c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:53:20.514794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574212428.mount: Deactivated successfully. Jul 6 23:53:20.529460 containerd[1471]: time="2025-07-06T23:53:20.528558220Z" level=info msg="CreateContainer within sandbox \"c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fc19c24cadbf9f72db9ad4766d331b1914e5213cf252542c42b80e8bc466b1d6\"" Jul 6 23:53:20.588845 containerd[1471]: time="2025-07-06T23:53:20.588088053Z" level=info msg="StartContainer for \"fc19c24cadbf9f72db9ad4766d331b1914e5213cf252542c42b80e8bc466b1d6\"" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.252 [WARNING][5200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f170527-172c-4d49-bd6a-2a7a489db328", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb", Pod:"coredns-668d6bf9bc-ssrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f1651c6098", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.256 [INFO][5200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.256 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" iface="eth0" netns="" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.256 [INFO][5200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.256 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.596 [INFO][5211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.600 [INFO][5211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.600 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.620 [WARNING][5211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.622 [INFO][5211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.625 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:20.639958 containerd[1471]: 2025-07-06 23:53:20.635 [INFO][5200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:20.639958 containerd[1471]: time="2025-07-06T23:53:20.639788050Z" level=info msg="TearDown network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" successfully" Jul 6 23:53:20.639958 containerd[1471]: time="2025-07-06T23:53:20.639825175Z" level=info msg="StopPodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" returns successfully" Jul 6 23:53:20.793196 systemd[1]: Started cri-containerd-fc19c24cadbf9f72db9ad4766d331b1914e5213cf252542c42b80e8bc466b1d6.scope - libcontainer container fc19c24cadbf9f72db9ad4766d331b1914e5213cf252542c42b80e8bc466b1d6. Jul 6 23:53:20.889766 containerd[1471]: time="2025-07-06T23:53:20.889287433Z" level=info msg="RemovePodSandbox for \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\"" Jul 6 23:53:20.897755 containerd[1471]: time="2025-07-06T23:53:20.897243045Z" level=info msg="Forcibly stopping sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\"" Jul 6 23:53:20.962429 containerd[1471]: time="2025-07-06T23:53:20.962362678Z" level=info msg="StartContainer for \"fc19c24cadbf9f72db9ad4766d331b1914e5213cf252542c42b80e8bc466b1d6\" returns successfully" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:20.982 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f170527-172c-4d49-bd6a-2a7a489db328", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"13953aa03e920502fef199ce34875e9486b5ad7c24f54e4b4b65e2a3c33f4bcb", Pod:"coredns-668d6bf9bc-ssrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f1651c6098", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:20.983 [INFO][5254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:20.984 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" iface="eth0" netns="" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:20.984 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:20.984 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.025 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.025 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.025 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.035 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.035 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" HandleID="k8s-pod-network.2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--ssrs9-eth0" Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.038 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:21.047941 containerd[1471]: 2025-07-06 23:53:21.043 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509" Jul 6 23:53:21.047941 containerd[1471]: time="2025-07-06T23:53:21.047785189Z" level=info msg="TearDown network for sandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" successfully" Jul 6 23:53:21.101513 containerd[1471]: time="2025-07-06T23:53:21.101439671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:21.110939 containerd[1471]: time="2025-07-06T23:53:21.110873315Z" level=info msg="RemovePodSandbox \"2553f80ad6cbc98819b8d628d5173d33964ba323e68ecefd06283dde67a58509\" returns successfully" Jul 6 23:53:21.134955 containerd[1471]: time="2025-07-06T23:53:21.134912158Z" level=info msg="StopPodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\"" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.214 [WARNING][5287] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.215 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.215 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" iface="eth0" netns="" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.215 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.215 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.296 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.298 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.299 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.310 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.310 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.314 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:21.323838 containerd[1471]: 2025-07-06 23:53:21.318 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.323838 containerd[1471]: time="2025-07-06T23:53:21.321794443Z" level=info msg="TearDown network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" successfully" Jul 6 23:53:21.323838 containerd[1471]: time="2025-07-06T23:53:21.321828177Z" level=info msg="StopPodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" returns successfully" Jul 6 23:53:21.364443 containerd[1471]: time="2025-07-06T23:53:21.363784520Z" level=info msg="RemovePodSandbox for \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\"" Jul 6 23:53:21.364694 containerd[1471]: time="2025-07-06T23:53:21.364652549Z" level=info msg="Forcibly stopping sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\"" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.492 [WARNING][5310] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" WorkloadEndpoint="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.494 [INFO][5310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.494 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" iface="eth0" netns="" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.494 [INFO][5310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.494 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.547 [INFO][5318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.547 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.547 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.559 [WARNING][5318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.559 [INFO][5318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" HandleID="k8s-pod-network.e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-whisker--6f877578bb--gzfxs-eth0" Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.577 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:21.596452 containerd[1471]: 2025-07-06 23:53:21.592 [INFO][5310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537" Jul 6 23:53:21.598132 containerd[1471]: time="2025-07-06T23:53:21.598087630Z" level=info msg="TearDown network for sandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" successfully" Jul 6 23:53:21.641908 containerd[1471]: time="2025-07-06T23:53:21.641724088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:21.642808 containerd[1471]: time="2025-07-06T23:53:21.642235080Z" level=info msg="RemovePodSandbox \"e2a2cf06bdd7488306291ff399fe86f90ace5208355e40fcc40688020cf63537\" returns successfully" Jul 6 23:53:21.671322 kubelet[2497]: I0706 23:53:21.657673 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-7nwsf" podStartSLOduration=27.970518719 podStartE2EDuration="40.62289488s" podCreationTimestamp="2025-07-06 23:52:41 +0000 UTC" firstStartedPulling="2025-07-06 23:53:07.581863514 +0000 UTC m=+48.082246744" lastFinishedPulling="2025-07-06 23:53:20.234239677 +0000 UTC m=+60.734622905" observedRunningTime="2025-07-06 23:53:21.608292682 +0000 UTC m=+62.108675915" watchObservedRunningTime="2025-07-06 23:53:21.62289488 +0000 UTC m=+62.123278117" Jul 6 23:53:21.675352 containerd[1471]: time="2025-07-06T23:53:21.675302836Z" level=info msg="StopPodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\"" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.792 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"54c845ff-89ff-445b-9c32-19dae23f02f5", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323", Pod:"goldmane-768f4c5c69-7nwsf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf3e090183b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.792 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.792 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" iface="eth0" netns="" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.792 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.792 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.877 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.879 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.879 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.890 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.892 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.894 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:21.900360 containerd[1471]: 2025-07-06 23:53:21.896 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:21.902526 containerd[1471]: time="2025-07-06T23:53:21.901905112Z" level=info msg="TearDown network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" successfully" Jul 6 23:53:21.902526 containerd[1471]: time="2025-07-06T23:53:21.901937813Z" level=info msg="StopPodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" returns successfully" Jul 6 23:53:21.992217 containerd[1471]: time="2025-07-06T23:53:21.991851194Z" level=info msg="RemovePodSandbox for \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\"" Jul 6 23:53:21.992217 containerd[1471]: time="2025-07-06T23:53:21.991895393Z" level=info msg="Forcibly stopping sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\"" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.041 [WARNING][5356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"54c845ff-89ff-445b-9c32-19dae23f02f5", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"c0b26de528e71db699ff97e4fcb421b168046cb7ea442911517fcf1563af7323", Pod:"goldmane-768f4c5c69-7nwsf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf3e090183b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.041 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.041 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" iface="eth0" netns="" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.041 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.041 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.070 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.071 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.071 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.078 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.078 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" HandleID="k8s-pod-network.e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-goldmane--768f4c5c69--7nwsf-eth0" Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.080 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:22.086254 containerd[1471]: 2025-07-06 23:53:22.083 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3" Jul 6 23:53:22.086254 containerd[1471]: time="2025-07-06T23:53:22.085490145Z" level=info msg="TearDown network for sandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" successfully" Jul 6 23:53:22.089061 containerd[1471]: time="2025-07-06T23:53:22.089002326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:22.089667 containerd[1471]: time="2025-07-06T23:53:22.089087708Z" level=info msg="RemovePodSandbox \"e3f332ae257a82b76db714f813487d05108bbcc1050eaf5fb1ac46b61d9626d3\" returns successfully" Jul 6 23:53:22.096028 containerd[1471]: time="2025-07-06T23:53:22.093603519Z" level=info msg="StopPodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\"" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.149 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0", GenerateName:"calico-kube-controllers-7c8fd5987-", Namespace:"calico-system", SelfLink:"", UID:"e98209af-ab48-432a-83ec-cef900e23c8c", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8fd5987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062", Pod:"calico-kube-controllers-7c8fd5987-h2chv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d2d67c7daa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.151 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.151 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" iface="eth0" netns="" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.152 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.152 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.186 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.186 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.186 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.194 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.194 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.196 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:22.201997 containerd[1471]: 2025-07-06 23:53:22.199 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.201997 containerd[1471]: time="2025-07-06T23:53:22.201879908Z" level=info msg="TearDown network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" successfully" Jul 6 23:53:22.201997 containerd[1471]: time="2025-07-06T23:53:22.201918257Z" level=info msg="StopPodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" returns successfully" Jul 6 23:53:22.214752 containerd[1471]: time="2025-07-06T23:53:22.214516976Z" level=info msg="RemovePodSandbox for \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\"" Jul 6 23:53:22.214752 containerd[1471]: time="2025-07-06T23:53:22.214623799Z" level=info msg="Forcibly stopping sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\"" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.269 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0", GenerateName:"calico-kube-controllers-7c8fd5987-", Namespace:"calico-system", SelfLink:"", UID:"e98209af-ab48-432a-83ec-cef900e23c8c", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8fd5987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"fc3826f90d514a9c0f67ff445447ce13311cf6f0c55238b52e5bccaac64b6062", Pod:"calico-kube-controllers-7c8fd5987-h2chv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d2d67c7daa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.270 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.270 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" iface="eth0" netns="" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.270 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.270 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.314 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.315 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.315 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.334 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.334 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" HandleID="k8s-pod-network.d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--kube--controllers--7c8fd5987--h2chv-eth0" Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.339 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:22.350860 containerd[1471]: 2025-07-06 23:53:22.346 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f" Jul 6 23:53:22.350860 containerd[1471]: time="2025-07-06T23:53:22.350707699Z" level=info msg="TearDown network for sandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" successfully" Jul 6 23:53:22.360046 containerd[1471]: time="2025-07-06T23:53:22.358919788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:22.360046 containerd[1471]: time="2025-07-06T23:53:22.359779574Z" level=info msg="RemovePodSandbox \"d22e4ac58215626f397f1a71e488ad8abcd68511df1133098363168ab0d8284f\" returns successfully" Jul 6 23:53:22.394812 containerd[1471]: time="2025-07-06T23:53:22.394194644Z" level=info msg="StopPodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\"" Jul 6 23:53:22.611721 systemd[1]: run-containerd-runc-k8s.io-fc19c24cadbf9f72db9ad4766d331b1914e5213cf252542c42b80e8bc466b1d6-runc.6uJk4X.mount: Deactivated successfully. Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.566 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"9370f950-40d0-4ae3-b3c3-c5c05feb1803", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc", Pod:"calico-apiserver-6b5bcb5877-jhw4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f6424b5c2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.567 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.567 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" iface="eth0" netns="" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.567 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.567 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.635 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.635 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.635 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.658 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.659 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.668 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:22.682126 containerd[1471]: 2025-07-06 23:53:22.674 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.683639 containerd[1471]: time="2025-07-06T23:53:22.683441433Z" level=info msg="TearDown network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" successfully" Jul 6 23:53:22.683902 containerd[1471]: time="2025-07-06T23:53:22.683880259Z" level=info msg="StopPodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" returns successfully" Jul 6 23:53:22.684921 containerd[1471]: time="2025-07-06T23:53:22.684895206Z" level=info msg="RemovePodSandbox for \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\"" Jul 6 23:53:22.685256 containerd[1471]: time="2025-07-06T23:53:22.685232098Z" level=info msg="Forcibly stopping sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\"" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.796 [WARNING][5463] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"9370f950-40d0-4ae3-b3c3-c5c05feb1803", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"28fd2b27f8ba4b2479947cd43ac1a84c9f83824b359360a1ddf71a5e154a68fc", Pod:"calico-apiserver-6b5bcb5877-jhw4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f6424b5c2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.796 [INFO][5463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.796 [INFO][5463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" iface="eth0" netns="" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.796 [INFO][5463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.796 [INFO][5463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.857 [INFO][5474] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.860 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.861 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.880 [WARNING][5474] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.880 [INFO][5474] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" HandleID="k8s-pod-network.cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--jhw4p-eth0" Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.886 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:22.898194 containerd[1471]: 2025-07-06 23:53:22.894 [INFO][5463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2" Jul 6 23:53:22.902097 containerd[1471]: time="2025-07-06T23:53:22.900720948Z" level=info msg="TearDown network for sandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" successfully" Jul 6 23:53:22.911189 containerd[1471]: time="2025-07-06T23:53:22.911101180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:22.911503 containerd[1471]: time="2025-07-06T23:53:22.911470787Z" level=info msg="RemovePodSandbox \"cc33a2787becf587a494d9286167a92f9c93bbe6243028ce7e8266a1814fdef2\" returns successfully" Jul 6 23:53:22.973367 containerd[1471]: time="2025-07-06T23:53:22.973322677Z" level=info msg="StopPodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\"" Jul 6 23:53:23.057645 containerd[1471]: time="2025-07-06T23:53:23.054631869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:53:23.067587 containerd[1471]: time="2025-07-06T23:53:23.051774032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:23.070980 containerd[1471]: time="2025-07-06T23:53:23.070927617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.802318747s" Jul 6 23:53:23.072277 containerd[1471]: time="2025-07-06T23:53:23.072223129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:53:23.072428 containerd[1471]: time="2025-07-06T23:53:23.071321160Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:23.076176 containerd[1471]: time="2025-07-06T23:53:23.075672085Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.059 [WARNING][5489] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58", Pod:"coredns-668d6bf9bc-5fvn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d2abf1e8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.059 [INFO][5489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.059 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" iface="eth0" netns="" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.059 [INFO][5489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.061 [INFO][5489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.115 [INFO][5496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.117 [INFO][5496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.118 [INFO][5496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.131 [WARNING][5496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.131 [INFO][5496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.134 [INFO][5496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:23.141060 containerd[1471]: 2025-07-06 23:53:23.137 [INFO][5489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.141060 containerd[1471]: time="2025-07-06T23:53:23.140845951Z" level=info msg="TearDown network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" successfully" Jul 6 23:53:23.141060 containerd[1471]: time="2025-07-06T23:53:23.140896850Z" level=info msg="StopPodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" returns successfully" Jul 6 23:53:23.146046 containerd[1471]: time="2025-07-06T23:53:23.146006549Z" level=info msg="CreateContainer within sandbox \"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:53:23.157312 containerd[1471]: time="2025-07-06T23:53:23.156599663Z" level=info msg="RemovePodSandbox for \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\"" Jul 6 23:53:23.157312 containerd[1471]: time="2025-07-06T23:53:23.156670121Z" level=info msg="Forcibly stopping sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\"" Jul 6 23:53:23.178191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368476640.mount: Deactivated successfully. 
Jul 6 23:53:23.188411 containerd[1471]: time="2025-07-06T23:53:23.188271565Z" level=info msg="CreateContainer within sandbox \"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e4f1126684507bea0c51b6e7e562c37a91e2672141250308dc356f1eaf5db4f7\"" Jul 6 23:53:23.191560 containerd[1471]: time="2025-07-06T23:53:23.191477056Z" level=info msg="StartContainer for \"e4f1126684507bea0c51b6e7e562c37a91e2672141250308dc356f1eaf5db4f7\"" Jul 6 23:53:23.254272 systemd[1]: Started cri-containerd-e4f1126684507bea0c51b6e7e562c37a91e2672141250308dc356f1eaf5db4f7.scope - libcontainer container e4f1126684507bea0c51b6e7e562c37a91e2672141250308dc356f1eaf5db4f7. Jul 6 23:53:23.361022 containerd[1471]: time="2025-07-06T23:53:23.360780169Z" level=info msg="StartContainer for \"e4f1126684507bea0c51b6e7e562c37a91e2672141250308dc356f1eaf5db4f7\" returns successfully" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.276 [WARNING][5510] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e94334e-f3fd-4a19-bf8e-6d83c5d49a81", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"111bad3a01f9529d5441fd892b1dfebbcc2c8ade20c3617b7a3a2fdcb65b7c58", Pod:"coredns-668d6bf9bc-5fvn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d2abf1e8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.276 [INFO][5510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.276 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" iface="eth0" netns="" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.276 [INFO][5510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.276 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.340 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.340 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.340 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.350 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.350 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" HandleID="k8s-pod-network.900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-coredns--668d6bf9bc--5fvn5-eth0" Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.356 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:23.362649 containerd[1471]: 2025-07-06 23:53:23.358 [INFO][5510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9" Jul 6 23:53:23.363718 containerd[1471]: time="2025-07-06T23:53:23.362812224Z" level=info msg="TearDown network for sandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" successfully" Jul 6 23:53:23.369518 containerd[1471]: time="2025-07-06T23:53:23.369193319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:23.369518 containerd[1471]: time="2025-07-06T23:53:23.369388025Z" level=info msg="RemovePodSandbox \"900f8e111fbcc4966d3cfef5c5a081a544323ef84e8c3835577aa2a38c90c6d9\" returns successfully" Jul 6 23:53:23.370901 containerd[1471]: time="2025-07-06T23:53:23.370462146Z" level=info msg="StopPodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\"" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.426 [WARNING][5573] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"a024894b-79af-4474-8f3a-f963becd00ab", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9", Pod:"calico-apiserver-6b5bcb5877-x5v6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali192042aa21c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.427 [INFO][5573] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.427 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" iface="eth0" netns="" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.427 [INFO][5573] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.427 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.458 [INFO][5584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.458 [INFO][5584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.458 [INFO][5584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.469 [WARNING][5584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.469 [INFO][5584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.471 [INFO][5584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:23.475803 containerd[1471]: 2025-07-06 23:53:23.473 [INFO][5573] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.476432 containerd[1471]: time="2025-07-06T23:53:23.476385428Z" level=info msg="TearDown network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" successfully" Jul 6 23:53:23.476432 containerd[1471]: time="2025-07-06T23:53:23.476422494Z" level=info msg="StopPodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" returns successfully" Jul 6 23:53:23.477419 containerd[1471]: time="2025-07-06T23:53:23.477363607Z" level=info msg="RemovePodSandbox for \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\"" Jul 6 23:53:23.477495 containerd[1471]: time="2025-07-06T23:53:23.477426931Z" level=info msg="Forcibly stopping sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\"" Jul 6 23:53:23.541212 kubelet[2497]: I0706 23:53:23.541123 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r52x4" podStartSLOduration=25.976019859 podStartE2EDuration="41.541100595s" podCreationTimestamp="2025-07-06 23:52:42 +0000 UTC" firstStartedPulling="2025-07-06 23:53:07.537587031 +0000 UTC m=+48.037970259" lastFinishedPulling="2025-07-06 23:53:23.10266778 +0000 UTC m=+63.603050995" observedRunningTime="2025-07-06 23:53:23.539830854 +0000 UTC m=+64.040214090" watchObservedRunningTime="2025-07-06 23:53:23.541100595 +0000 UTC m=+64.041483831" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.567 [WARNING][5598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0", GenerateName:"calico-apiserver-6b5bcb5877-", Namespace:"calico-apiserver", SelfLink:"", UID:"a024894b-79af-4474-8f3a-f963becd00ab", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5bcb5877", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"455e0262cf5cefa4538d86b47bc463cc3f9e2dac9e82e620bd87d297cda43eb9", Pod:"calico-apiserver-6b5bcb5877-x5v6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali192042aa21c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.568 [INFO][5598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.569 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" iface="eth0" netns="" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.569 [INFO][5598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.570 [INFO][5598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.631 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.632 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.632 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.641 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.641 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" HandleID="k8s-pod-network.91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-calico--apiserver--6b5bcb5877--x5v6z-eth0" Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.645 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:23.651805 containerd[1471]: 2025-07-06 23:53:23.648 [INFO][5598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769" Jul 6 23:53:23.653568 containerd[1471]: time="2025-07-06T23:53:23.652160927Z" level=info msg="TearDown network for sandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" successfully" Jul 6 23:53:23.662679 containerd[1471]: time="2025-07-06T23:53:23.658534706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:23.664165 containerd[1471]: time="2025-07-06T23:53:23.664017567Z" level=info msg="RemovePodSandbox \"91ef19fe89d074ff7ce13e5467696ae78dbb3a68e50d50a7b4a678d3658a4769\" returns successfully" Jul 6 23:53:23.737774 containerd[1471]: time="2025-07-06T23:53:23.736880320Z" level=info msg="StopPodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\"" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.819 [WARNING][5641] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d7dcac2-ec06-4a54-afc7-632e8abadb5b", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97", Pod:"csi-node-driver-r52x4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid627b1c498a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.819 [INFO][5641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.819 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" iface="eth0" netns="" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.819 [INFO][5641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.819 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.855 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.856 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.856 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.863 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.863 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.865 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:23.873016 containerd[1471]: 2025-07-06 23:53:23.868 [INFO][5641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.873016 containerd[1471]: time="2025-07-06T23:53:23.873082340Z" level=info msg="TearDown network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" successfully" Jul 6 23:53:23.874655 containerd[1471]: time="2025-07-06T23:53:23.873106980Z" level=info msg="StopPodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" returns successfully" Jul 6 23:53:23.874655 containerd[1471]: time="2025-07-06T23:53:23.874268927Z" level=info msg="RemovePodSandbox for \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\"" Jul 6 23:53:23.874655 containerd[1471]: time="2025-07-06T23:53:23.874301010Z" level=info msg="Forcibly stopping sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\"" Jul 6 23:53:23.918178 kubelet[2497]: I0706 23:53:23.916700 2497 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:53:23.918178 kubelet[2497]: I0706 23:53:23.918022 2497 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.927 [WARNING][5662] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d7dcac2-ec06-4a54-afc7-632e8abadb5b", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-c-43d64a8ca6", ContainerID:"f1797bf2f5b0c9795db2f465e246593f3566b501aee81563c7b0a1c82c299c97", Pod:"csi-node-driver-r52x4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid627b1c498a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.928 [INFO][5662] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.928 [INFO][5662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" iface="eth0" netns="" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.928 [INFO][5662] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.928 [INFO][5662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.967 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.967 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.967 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.977 [WARNING][5669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.977 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" HandleID="k8s-pod-network.3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Workload="ci--4081.3.4--c--43d64a8ca6-k8s-csi--node--driver--r52x4-eth0" Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.979 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:23.984219 containerd[1471]: 2025-07-06 23:53:23.982 [INFO][5662] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1" Jul 6 23:53:23.984713 containerd[1471]: time="2025-07-06T23:53:23.984263247Z" level=info msg="TearDown network for sandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" successfully" Jul 6 23:53:23.986612 containerd[1471]: time="2025-07-06T23:53:23.986540809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:23.986801 containerd[1471]: time="2025-07-06T23:53:23.986624396Z" level=info msg="RemovePodSandbox \"3119a2b860effbd3b9757c9a070a11fd2e79134d3fdee1b0462b196a1ef367f1\" returns successfully" Jul 6 23:53:24.054825 systemd[1]: Started sshd@10-209.38.68.255:22-139.178.89.65:49702.service - OpenSSH per-connection server daemon (139.178.89.65:49702). Jul 6 23:53:24.179857 sshd[5676]: Accepted publickey for core from 139.178.89.65 port 49702 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:24.183051 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:24.191584 systemd-logind[1445]: New session 10 of user core. Jul 6 23:53:24.193206 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:53:24.859308 sshd[5676]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:24.873547 systemd[1]: sshd@10-209.38.68.255:22-139.178.89.65:49702.service: Deactivated successfully. Jul 6 23:53:24.876009 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:53:24.877563 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:53:24.883470 systemd[1]: Started sshd@11-209.38.68.255:22-139.178.89.65:49710.service - OpenSSH per-connection server daemon (139.178.89.65:49710). Jul 6 23:53:24.886904 systemd-logind[1445]: Removed session 10. Jul 6 23:53:24.944527 sshd[5690]: Accepted publickey for core from 139.178.89.65 port 49710 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:24.946457 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:24.952670 systemd-logind[1445]: New session 11 of user core. Jul 6 23:53:24.962294 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:53:25.211539 sshd[5690]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:25.223150 systemd[1]: sshd@11-209.38.68.255:22-139.178.89.65:49710.service: Deactivated successfully. 
Jul 6 23:53:25.226493 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:53:25.227914 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:53:25.236351 systemd[1]: Started sshd@12-209.38.68.255:22-139.178.89.65:49716.service - OpenSSH per-connection server daemon (139.178.89.65:49716). Jul 6 23:53:25.240470 systemd-logind[1445]: Removed session 11. Jul 6 23:53:25.301171 sshd[5701]: Accepted publickey for core from 139.178.89.65 port 49716 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:25.303736 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:25.311323 systemd-logind[1445]: New session 12 of user core. Jul 6 23:53:25.316307 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:53:25.466605 sshd[5701]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:25.471783 systemd[1]: sshd@12-209.38.68.255:22-139.178.89.65:49716.service: Deactivated successfully. Jul 6 23:53:25.475117 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:53:25.476211 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:53:25.477357 systemd-logind[1445]: Removed session 12. Jul 6 23:53:30.484410 systemd[1]: Started sshd@13-209.38.68.255:22-139.178.89.65:34292.service - OpenSSH per-connection server daemon (139.178.89.65:34292). Jul 6 23:53:30.559173 sshd[5723]: Accepted publickey for core from 139.178.89.65 port 34292 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:30.562048 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:30.568431 systemd-logind[1445]: New session 13 of user core. Jul 6 23:53:30.573206 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:53:30.807085 sshd[5723]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:30.815769 systemd[1]: sshd@13-209.38.68.255:22-139.178.89.65:34292.service: Deactivated successfully. Jul 6 23:53:30.822170 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:53:30.823752 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:53:30.826730 systemd-logind[1445]: Removed session 13. Jul 6 23:53:35.830436 systemd[1]: Started sshd@14-209.38.68.255:22-139.178.89.65:34304.service - OpenSSH per-connection server daemon (139.178.89.65:34304). Jul 6 23:53:35.959098 sshd[5764]: Accepted publickey for core from 139.178.89.65 port 34304 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:35.961580 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:35.970259 systemd-logind[1445]: New session 14 of user core. Jul 6 23:53:35.977552 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:53:36.385180 sshd[5764]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:36.392181 systemd[1]: sshd@14-209.38.68.255:22-139.178.89.65:34304.service: Deactivated successfully. Jul 6 23:53:36.395731 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:53:36.398745 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:53:36.400218 systemd-logind[1445]: Removed session 14. 
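Sessions 10 through 14 above all trace the same lifecycle: a per-connection sshd@<n>-<local>:22-<peer>:<port>.service unit starts, pam_unix opens the session, systemd-logind announces "New session N of user core" and runs it in session-N.scope, and teardown logs the matching "Removed session N" alongside the two "Deactivated successfully" lines. A small Go sketch that pairs those logind lines to measure session lifetimes; the regexes assume the exact message wording used in this journal and would need adjusting for other distros or systemd versions:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    // Two journal lines copied from the session-10 lifecycle above.
    var lines = []string{
        "Jul 6 23:53:24.191584 systemd-logind[1445]: New session 10 of user core.",
        "Jul 6 23:53:24.886904 systemd-logind[1445]: Removed session 10.",
    }

    // Assumed wording: these patterns match the logind messages as printed
    // in this journal.
    var (
        reOpen  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) `)
        reClose = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
    )

    // stamp parses the journal's "Jul 6 23:53:24.191584" prefix (no year,
    // so only same-day durations are meaningful).
    func stamp(s string) time.Time {
        t, err := time.Parse("Jan 2 15:04:05.000000", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        opened := map[string]time.Time{}
        for _, l := range lines {
            if m := reOpen.FindStringSubmatch(l); m != nil {
                opened[m[2]] = stamp(m[1])
            } else if m := reClose.FindStringSubmatch(l); m != nil {
                fmt.Printf("session %s lasted %v\n", m[2], stamp(m[1]).Sub(opened[m[2]]))
            }
        }
    }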
Jul 6 23:53:38.717131 kubelet[2497]: E0706 23:53:38.716795 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:38.772907 kubelet[2497]: I0706 23:53:38.772300 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:53:41.405839 systemd[1]: Started sshd@15-209.38.68.255:22-139.178.89.65:43298.service - OpenSSH per-connection server daemon (139.178.89.65:43298). Jul 6 23:53:41.525193 sshd[5779]: Accepted publickey for core from 139.178.89.65 port 43298 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:41.528267 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:41.534875 systemd-logind[1445]: New session 15 of user core. Jul 6 23:53:41.540551 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:53:41.978012 sshd[5779]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:41.987688 systemd[1]: sshd@15-209.38.68.255:22-139.178.89.65:43298.service: Deactivated successfully. Jul 6 23:53:41.992245 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:53:41.996385 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:53:41.998257 systemd-logind[1445]: Removed session 15. Jul 6 23:53:42.184692 systemd[1]: run-containerd-runc-k8s.io-06ff6b56f6fe8c51a54356132db6e67f237f7c08a0d1b094dbdb2c237957c049-runc.3PFzsv.mount: Deactivated successfully. Jul 6 23:53:46.996386 systemd[1]: Started sshd@16-209.38.68.255:22-139.178.89.65:43302.service - OpenSSH per-connection server daemon (139.178.89.65:43302). Jul 6 23:53:47.131396 sshd[5820]: Accepted publickey for core from 139.178.89.65 port 43302 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:47.133940 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:47.140446 systemd-logind[1445]: New session 16 of user core. Jul 6 23:53:47.143194 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:53:47.505254 sshd[5820]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:47.518393 systemd[1]: sshd@16-209.38.68.255:22-139.178.89.65:43302.service: Deactivated successfully. Jul 6 23:53:47.521841 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:53:47.524294 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:53:47.531387 systemd[1]: Started sshd@17-209.38.68.255:22-139.178.89.65:43308.service - OpenSSH per-connection server daemon (139.178.89.65:43308). Jul 6 23:53:47.533648 systemd-logind[1445]: Removed session 16. Jul 6 23:53:47.607655 sshd[5832]: Accepted publickey for core from 139.178.89.65 port 43308 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:47.609947 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:47.615770 systemd-logind[1445]: New session 17 of user core. Jul 6 23:53:47.627232 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:53:47.938494 sshd[5832]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:47.953081 systemd[1]: sshd@17-209.38.68.255:22-139.178.89.65:43308.service: Deactivated successfully. Jul 6 23:53:47.955910 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:53:47.956771 systemd-logind[1445]: Session 17 logged out. 
Waiting for processes to exit. Jul 6 23:53:47.965419 systemd[1]: Started sshd@18-209.38.68.255:22-139.178.89.65:43312.service - OpenSSH per-connection server daemon (139.178.89.65:43312). Jul 6 23:53:47.970211 systemd-logind[1445]: Removed session 17. Jul 6 23:53:48.037956 sshd[5848]: Accepted publickey for core from 139.178.89.65 port 43312 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:48.041310 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:48.051098 systemd-logind[1445]: New session 18 of user core. Jul 6 23:53:48.056223 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:53:49.176718 sshd[5848]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:49.194187 systemd[1]: Started sshd@19-209.38.68.255:22-139.178.89.65:43328.service - OpenSSH per-connection server daemon (139.178.89.65:43328). Jul 6 23:53:49.198713 systemd[1]: sshd@18-209.38.68.255:22-139.178.89.65:43312.service: Deactivated successfully. Jul 6 23:53:49.206770 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:53:49.215133 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:53:49.224513 systemd-logind[1445]: Removed session 18. Jul 6 23:53:49.302469 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 43328 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:49.305887 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:49.313498 systemd-logind[1445]: New session 19 of user core. Jul 6 23:53:49.320220 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:53:50.001319 sshd[5863]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:50.023552 systemd[1]: Started sshd@20-209.38.68.255:22-139.178.89.65:60488.service - OpenSSH per-connection server daemon (139.178.89.65:60488). Jul 6 23:53:50.024528 systemd[1]: sshd@19-209.38.68.255:22-139.178.89.65:43328.service: Deactivated successfully. Jul 6 23:53:50.033530 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:53:50.039389 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:53:50.043223 systemd-logind[1445]: Removed session 19. Jul 6 23:53:50.103023 sshd[5879]: Accepted publickey for core from 139.178.89.65 port 60488 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:50.104642 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:50.112425 systemd-logind[1445]: New session 20 of user core. Jul 6 23:53:50.118341 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:53:50.257142 sshd[5879]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:50.261621 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:53:50.264565 systemd[1]: sshd@20-209.38.68.255:22-139.178.89.65:60488.service: Deactivated successfully. Jul 6 23:53:50.267591 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:53:50.269591 systemd-logind[1445]: Removed session 20. 
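The kubelet "Nameserver limits exceeded" error at 23:53:38 above (and repeated below) fires because glibc-style resolvers honor at most three nameserver entries, a limit Kubernetes mirrors as MaxDNSNameservers=3: kubelet applies the first three entries from the node's resolv.conf and reports the rest as omitted. Note the applied line here carries 67.207.67.3 twice, so the droplet's resolv.conf apparently lists duplicate resolvers. An illustrative Go helper for the truncation rule follows; it is not kubelet's actual code, and the sample host list is hypothetical:

    package main

    import "fmt"

    // maxDNSNameservers mirrors Kubernetes' validation limit of 3, which in
    // turn tracks the classic glibc resolver limit (MAXNS).
    const maxDNSNameservers = 3

    // truncateNameservers keeps the first three entries and reports whether
    // any were dropped — the condition that produces the "Nameserver limits
    // exceeded" error seen in this journal. Illustrative only.
    func truncateNameservers(ns []string) (applied []string, omitted bool) {
        if len(ns) <= maxDNSNameservers {
            return ns, false
        }
        return ns[:maxDNSNameservers], true
    }

    func main() {
        // Hypothetical host resolv.conf contents that would reproduce the
        // applied line in the log, duplicate included.
        host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
        applied, omitted := truncateNameservers(host)
        if omitted {
            fmt.Println("Nameserver limits exceeded, applied:", applied)
        }
    }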
Jul 6 23:53:50.674692 kubelet[2497]: E0706 23:53:50.674622 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:52.183795 kubelet[2497]: I0706 23:53:52.183600 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:53:55.280454 systemd[1]: Started sshd@21-209.38.68.255:22-139.178.89.65:60494.service - OpenSSH per-connection server daemon (139.178.89.65:60494). Jul 6 23:53:55.387143 sshd[5917]: Accepted publickey for core from 139.178.89.65 port 60494 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:55.388814 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:55.398297 systemd-logind[1445]: New session 21 of user core. Jul 6 23:53:55.402452 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:53:55.817147 sshd[5917]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:55.826825 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:53:55.828131 systemd[1]: sshd@21-209.38.68.255:22-139.178.89.65:60494.service: Deactivated successfully. Jul 6 23:53:55.833646 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:53:55.837688 systemd-logind[1445]: Removed session 21. Jul 6 23:53:56.651859 kubelet[2497]: E0706 23:53:56.651487 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:56.651859 kubelet[2497]: E0706 23:53:56.651618 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:54:00.851449 systemd[1]: Started sshd@22-209.38.68.255:22-139.178.89.65:45250.service - OpenSSH per-connection server daemon (139.178.89.65:45250). Jul 6 23:54:00.970925 sshd[5955]: Accepted publickey for core from 139.178.89.65 port 45250 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:00.973768 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:00.980743 systemd-logind[1445]: New session 22 of user core. Jul 6 23:54:00.984221 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:54:01.659145 sshd[5955]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:01.666576 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:54:01.667026 systemd[1]: sshd@22-209.38.68.255:22-139.178.89.65:45250.service: Deactivated successfully. Jul 6 23:54:01.672483 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:54:01.679355 systemd-logind[1445]: Removed session 22. Jul 6 23:54:06.684604 systemd[1]: Started sshd@23-209.38.68.255:22-139.178.89.65:45264.service - OpenSSH per-connection server daemon (139.178.89.65:45264). Jul 6 23:54:06.833850 sshd[5989]: Accepted publickey for core from 139.178.89.65 port 45264 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:06.836435 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:06.851725 systemd-logind[1445]: New session 23 of user core. Jul 6 23:54:06.858881 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 6 23:54:07.909249 sshd[5989]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:07.913310 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:54:07.915808 systemd[1]: sshd@23-209.38.68.255:22-139.178.89.65:45264.service: Deactivated successfully. Jul 6 23:54:07.918677 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:54:07.920205 systemd-logind[1445]: Removed session 23. Jul 6 23:54:08.652018 kubelet[2497]: E0706 23:54:08.651928 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"