Aug 13 07:06:18.930640 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:06:18.930681 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:06:18.930729 kernel: BIOS-provided physical RAM map: Aug 13 07:06:18.930741 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:06:18.930751 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:06:18.930760 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:06:18.930772 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 07:06:18.930782 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 07:06:18.930792 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:06:18.930807 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:06:18.930817 kernel: NX (Execute Disable) protection: active Aug 13 07:06:18.930827 kernel: APIC: Static calls initialized Aug 13 07:06:18.930856 kernel: SMBIOS 2.8 present. Aug 13 07:06:18.930867 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 07:06:18.930880 kernel: Hypervisor detected: KVM Aug 13 07:06:18.930895 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:06:18.930910 kernel: kvm-clock: using sched offset of 3413334361 cycles Aug 13 07:06:18.930921 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:06:18.930933 kernel: tsc: Detected 2494.140 MHz processor Aug 13 07:06:18.930944 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:06:18.930956 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:06:18.930968 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 07:06:18.930979 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:06:18.930990 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:06:18.931006 kernel: ACPI: Early table checksum verification disabled Aug 13 07:06:18.931017 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 07:06:18.931030 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.931042 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.931054 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.931065 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 07:06:18.931076 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.931087 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.931100 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.932157 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:06:18.932184 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 07:06:18.932193 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 07:06:18.932203 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 07:06:18.932211 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 07:06:18.932219 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 07:06:18.932228 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 07:06:18.932248 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 07:06:18.932257 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:06:18.932265 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:06:18.932274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:06:18.932283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 07:06:18.932297 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 07:06:18.932307 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 07:06:18.932320 kernel: Zone ranges: Aug 13 07:06:18.932329 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:06:18.932337 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 07:06:18.932346 kernel: Normal empty Aug 13 07:06:18.932354 kernel: Movable zone start for each node Aug 13 07:06:18.932363 kernel: Early memory node ranges Aug 13 07:06:18.932371 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:06:18.932380 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 07:06:18.932388 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 07:06:18.932400 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:06:18.932408 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:06:18.932420 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 07:06:18.932428 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:06:18.932437 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:06:18.932468 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:06:18.932477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:06:18.932486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:06:18.932494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:06:18.932507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:06:18.932515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:06:18.932524 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:06:18.932532 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:06:18.932540 kernel: TSC deadline timer available Aug 13 07:06:18.932549 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:06:18.932557 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:06:18.932565 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 07:06:18.932577 kernel: Booting paravirtualized kernel on KVM Aug 13 07:06:18.932586 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:06:18.932598 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:06:18.932607 kernel: percpu: Embedded 58 pages/cpu 
s197096 r8192 d32280 u1048576 Aug 13 07:06:18.932615 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:06:18.932623 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:06:18.932632 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 07:06:18.932642 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:06:18.932651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:06:18.932659 kernel: random: crng init done Aug 13 07:06:18.932672 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:06:18.932680 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:06:18.932689 kernel: Fallback order for Node 0: 0 Aug 13 07:06:18.932697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 07:06:18.932706 kernel: Policy zone: DMA32 Aug 13 07:06:18.932714 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:06:18.932723 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved) Aug 13 07:06:18.932732 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:06:18.932744 kernel: Kernel/User page tables isolation: enabled Aug 13 07:06:18.932753 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:06:18.932761 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:06:18.932770 kernel: Dynamic Preempt: voluntary Aug 13 07:06:18.932778 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:06:18.932788 kernel: rcu: RCU event tracing is enabled. Aug 13 07:06:18.932797 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:06:18.932806 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:06:18.932814 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:06:18.932822 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:06:18.932834 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:06:18.932842 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:06:18.932851 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:06:18.932859 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
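The command line logged above can be split back into key/value parameters; dracut prepends its own copy of rootflags=rw mount.usrflags=ro, which is why those tokens appear twice. A minimal sketch in Python, assuming it runs on the booted host (it reads /proc/cmdline rather than this transcript):

# Sketch: parse a kernel command line like the one above into a dict.
# Parameters without "=" map to None; for duplicated keys such as
# rootflags, the last occurrence wins.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())

print(params.get("root"))            # LABEL=ROOT
print(params.get("verity.usrhash"))  # the dm-verity root hash for /usr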
Aug 13 07:06:18.932870 kernel: Console: colour VGA+ 80x25 Aug 13 07:06:18.932878 kernel: printk: console [tty0] enabled Aug 13 07:06:18.932886 kernel: printk: console [ttyS0] enabled Aug 13 07:06:18.932895 kernel: ACPI: Core revision 20230628 Aug 13 07:06:18.932903 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:06:18.932915 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:06:18.932924 kernel: x2apic enabled Aug 13 07:06:18.932932 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:06:18.932941 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:06:18.932949 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 07:06:18.932957 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Aug 13 07:06:18.932966 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 07:06:18.932975 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 07:06:18.932997 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:06:18.933006 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:06:18.933015 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:06:18.933027 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 07:06:18.933036 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:06:18.933045 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:06:18.933054 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:06:18.933063 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:06:18.933072 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:06:18.933086 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:06:18.933096 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:06:18.933104 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:06:18.933113 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:06:18.935174 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:06:18.935186 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:06:18.935196 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:06:18.935205 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:06:18.935221 kernel: landlock: Up and running. Aug 13 07:06:18.935230 kernel: SELinux: Initializing. Aug 13 07:06:18.935239 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:06:18.935249 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:06:18.935258 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 07:06:18.935267 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:06:18.935277 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:06:18.935286 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:06:18.935295 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
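The Spectre/MDS/MMIO mitigation lines above have sysfs counterparts that can be read at any time after boot. A short sketch; the exact set of files depends on kernel version and CPU:

# Sketch: dump the mitigation status the kernel reports, matching the
# Spectre V1/V2, MDS and MMIO Stale Data lines logged above.
import pathlib

vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
for f in sorted(vuln_dir.iterdir()):
    print(f"{f.name}: {f.read_text().strip()}")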
Aug 13 07:06:18.935308 kernel: signal: max sigframe size: 1776 Aug 13 07:06:18.935317 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:06:18.935328 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:06:18.935337 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:06:18.935346 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:06:18.935355 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:06:18.935364 kernel: .... node #0, CPUs: #1 Aug 13 07:06:18.935373 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:06:18.935389 kernel: smpboot: Max logical packages: 1 Aug 13 07:06:18.935402 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Aug 13 07:06:18.935411 kernel: devtmpfs: initialized Aug 13 07:06:18.935420 kernel: x86/mm: Memory block size: 128MB Aug 13 07:06:18.935429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:06:18.935439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:06:18.935448 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:06:18.935457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:06:18.935466 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:06:18.935475 kernel: audit: type=2000 audit(1755068778.593:1): state=initialized audit_enabled=0 res=1 Aug 13 07:06:18.935488 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:06:18.935497 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:06:18.935506 kernel: cpuidle: using governor menu Aug 13 07:06:18.935516 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:06:18.935525 kernel: dca service started, version 1.12.1 Aug 13 07:06:18.935534 kernel: PCI: Using configuration type 1 for base access Aug 13 07:06:18.935543 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
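Several clocksources are registered during this boot (kvm-clock, hpet, refined-jiffies, and later tsc); the winner and the remaining candidates are visible in sysfs. A minimal sketch, assuming the usual single clocksource device:

# Sketch: show which clocksource the kernel settled on (kvm-clock here,
# per the "Switched to clocksource kvm-clock" line further below).
base = "/sys/devices/system/clocksource/clocksource0"
for name in ("current_clocksource", "available_clocksource"):
    with open(f"{base}/{name}") as f:
        print(name, "=", f.read().strip())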
Aug 13 07:06:18.935552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:06:18.935562 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:06:18.935574 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:06:18.935583 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:06:18.935593 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:06:18.935602 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:06:18.935611 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:06:18.935620 kernel: ACPI: Interpreter enabled Aug 13 07:06:18.935629 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:06:18.935638 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:06:18.935647 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:06:18.935659 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:06:18.935668 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 07:06:18.935677 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:06:18.935909 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:06:18.936018 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:06:18.936130 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:06:18.936143 kernel: acpiphp: Slot [3] registered Aug 13 07:06:18.936157 kernel: acpiphp: Slot [4] registered Aug 13 07:06:18.936166 kernel: acpiphp: Slot [5] registered Aug 13 07:06:18.936175 kernel: acpiphp: Slot [6] registered Aug 13 07:06:18.936184 kernel: acpiphp: Slot [7] registered Aug 13 07:06:18.936193 kernel: acpiphp: Slot [8] registered Aug 13 07:06:18.936202 kernel: acpiphp: Slot [9] registered Aug 13 07:06:18.936211 kernel: acpiphp: Slot [10] registered Aug 13 07:06:18.936220 kernel: acpiphp: Slot [11] registered Aug 13 07:06:18.936229 kernel: acpiphp: Slot [12] registered Aug 13 07:06:18.936242 kernel: acpiphp: Slot [13] registered Aug 13 07:06:18.936251 kernel: acpiphp: Slot [14] registered Aug 13 07:06:18.936260 kernel: acpiphp: Slot [15] registered Aug 13 07:06:18.936269 kernel: acpiphp: Slot [16] registered Aug 13 07:06:18.936278 kernel: acpiphp: Slot [17] registered Aug 13 07:06:18.936287 kernel: acpiphp: Slot [18] registered Aug 13 07:06:18.936296 kernel: acpiphp: Slot [19] registered Aug 13 07:06:18.936305 kernel: acpiphp: Slot [20] registered Aug 13 07:06:18.936314 kernel: acpiphp: Slot [21] registered Aug 13 07:06:18.936323 kernel: acpiphp: Slot [22] registered Aug 13 07:06:18.936336 kernel: acpiphp: Slot [23] registered Aug 13 07:06:18.936345 kernel: acpiphp: Slot [24] registered Aug 13 07:06:18.936354 kernel: acpiphp: Slot [25] registered Aug 13 07:06:18.936362 kernel: acpiphp: Slot [26] registered Aug 13 07:06:18.936371 kernel: acpiphp: Slot [27] registered Aug 13 07:06:18.936380 kernel: acpiphp: Slot [28] registered Aug 13 07:06:18.936389 kernel: acpiphp: Slot [29] registered Aug 13 07:06:18.936398 kernel: acpiphp: Slot [30] registered Aug 13 07:06:18.936407 kernel: acpiphp: Slot [31] registered Aug 13 07:06:18.936419 kernel: PCI host bridge to bus 0000:00 Aug 13 07:06:18.936538 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:06:18.936626 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 07:06:18.936712 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Aug 13 07:06:18.936797 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 07:06:18.936882 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 07:06:18.936967 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:06:18.937105 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 07:06:18.938000 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 07:06:18.938242 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 07:06:18.938354 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 07:06:18.938455 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 07:06:18.938570 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 07:06:18.938710 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 07:06:18.938836 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 07:06:18.938965 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 07:06:18.939071 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 07:06:18.942393 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 07:06:18.942566 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 07:06:18.942694 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 07:06:18.942836 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 07:06:18.942947 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 07:06:18.943066 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 07:06:18.943300 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 07:06:18.943411 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 07:06:18.943533 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:06:18.943649 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:06:18.943774 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 07:06:18.943881 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 07:06:18.944005 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 07:06:18.944163 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:06:18.944279 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 07:06:18.944396 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 07:06:18.944512 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 07:06:18.944670 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 07:06:18.944800 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 07:06:18.944929 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 07:06:18.945048 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 07:06:18.947296 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:06:18.947460 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:06:18.947572 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 07:06:18.947683 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 07:06:18.947796 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:06:18.947893 kernel: pci 0000:00:07.0: reg 0x10: [io 
0xc080-0xc0ff] Aug 13 07:06:18.947990 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 07:06:18.948090 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 07:06:18.948214 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 07:06:18.948326 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 07:06:18.948423 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 07:06:18.948435 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:06:18.948445 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:06:18.948454 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:06:18.948466 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:06:18.948480 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 07:06:18.948499 kernel: iommu: Default domain type: Translated Aug 13 07:06:18.948512 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:06:18.948526 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:06:18.948540 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:06:18.948552 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:06:18.948561 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 07:06:18.948692 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 07:06:18.948820 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 07:06:18.948937 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:06:18.948957 kernel: vgaarb: loaded Aug 13 07:06:18.948967 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:06:18.948976 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:06:18.948985 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:06:18.948995 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:06:18.949012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:06:18.949026 kernel: pnp: PnP ACPI init Aug 13 07:06:18.949035 kernel: pnp: PnP ACPI: found 4 devices Aug 13 07:06:18.949045 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:06:18.949058 kernel: NET: Registered PF_INET protocol family Aug 13 07:06:18.949067 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:06:18.949077 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:06:18.949086 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:06:18.949095 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:06:18.949104 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 07:06:18.949113 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:06:18.951035 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:06:18.951047 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:06:18.951070 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:06:18.951084 kernel: NET: Registered PF_XDP protocol family Aug 13 07:06:18.951295 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:06:18.951389 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:06:18.951483 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff 
window] Aug 13 07:06:18.951593 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 07:06:18.951681 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 07:06:18.951792 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 07:06:18.951908 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:06:18.951923 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 07:06:18.952041 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29241 usecs Aug 13 07:06:18.952062 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:06:18.952072 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:06:18.952082 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 07:06:18.952091 kernel: Initialise system trusted keyrings Aug 13 07:06:18.952101 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:06:18.953182 kernel: Key type asymmetric registered Aug 13 07:06:18.953201 kernel: Asymmetric key parser 'x509' registered Aug 13 07:06:18.953212 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:06:18.953223 kernel: io scheduler mq-deadline registered Aug 13 07:06:18.953233 kernel: io scheduler kyber registered Aug 13 07:06:18.953243 kernel: io scheduler bfq registered Aug 13 07:06:18.953252 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:06:18.953263 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 07:06:18.953273 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 07:06:18.953300 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 07:06:18.953351 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:06:18.953384 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:06:18.953397 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:06:18.953412 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:06:18.953424 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:06:18.953619 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 07:06:18.953636 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:06:18.953735 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 07:06:18.953855 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:06:18 UTC (1755068778) Aug 13 07:06:18.953960 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 07:06:18.953972 kernel: intel_pstate: CPU model not supported Aug 13 07:06:18.953982 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:06:18.953992 kernel: Segment Routing with IPv6 Aug 13 07:06:18.954001 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:06:18.954011 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:06:18.954020 kernel: Key type dns_resolver registered Aug 13 07:06:18.954039 kernel: IPI shorthand broadcast: enabled Aug 13 07:06:18.954053 kernel: sched_clock: Marking stable (948005231, 93187997)->(1145908971, -104715743) Aug 13 07:06:18.954082 kernel: registered taskstats version 1 Aug 13 07:06:18.954095 kernel: Loading compiled-in X.509 certificates Aug 13 07:06:18.954108 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:06:18.955185 kernel: Key type .fscrypt registered Aug 13 07:06:18.955198 kernel: Key type fscrypt-provisioning registered Aug 13 
07:06:18.955208 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:06:18.955217 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:06:18.955233 kernel: ima: No architecture policies found Aug 13 07:06:18.955242 kernel: clk: Disabling unused clocks Aug 13 07:06:18.955251 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:06:18.955260 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:06:18.955270 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:06:18.955307 kernel: Run /init as init process Aug 13 07:06:18.955320 kernel: with arguments: Aug 13 07:06:18.955330 kernel: /init Aug 13 07:06:18.955340 kernel: with environment: Aug 13 07:06:18.955353 kernel: HOME=/ Aug 13 07:06:18.955362 kernel: TERM=linux Aug 13 07:06:18.955372 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:06:18.955386 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:06:18.955400 systemd[1]: Detected virtualization kvm. Aug 13 07:06:18.955417 systemd[1]: Detected architecture x86-64. Aug 13 07:06:18.955430 systemd[1]: Running in initrd. Aug 13 07:06:18.955445 systemd[1]: No hostname configured, using default hostname. Aug 13 07:06:18.955464 systemd[1]: Hostname set to . Aug 13 07:06:18.955485 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:06:18.955502 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:06:18.955513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:06:18.955523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:06:18.955535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:06:18.955545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:06:18.955559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:06:18.955569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:06:18.955581 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:06:18.955591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:06:18.955601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:06:18.955611 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:06:18.955621 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:06:18.955635 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:06:18.955645 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:06:18.955655 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:06:18.955669 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:06:18.955679 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:06:18.955690 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
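The device unit names above (dev-disk-by\x2dlabel-ROOT.device and friends) come from systemd's path escaping: '/' becomes '-', so a literal '-' in the path must be hex-escaped as \x2d. A simplified reimplementation sketch (the real logic is systemd-escape --path, which also handles cases like leading dots that are ignored here):

# Sketch: reproduce the unit names seen in the log from their device paths.
def unit_from_path(path: str, suffix: str = ".device") -> str:
    escaped = []
    for ch in path.strip("/"):
        if ch == "/":
            escaped.append("-")                  # path separator -> dash
        elif ch.isalnum() or ch in "_.:":
            escaped.append(ch)                   # kept verbatim (simplified set)
        else:
            escaped.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
    return "".join(escaped) + suffix

print(unit_from_path("/dev/disk/by-label/ROOT"))
# -> dev-disk-by\x2dlabel-ROOT.device, as in the Expecting device line above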
Aug 13 07:06:18.955712 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:06:18.955728 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:06:18.955740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:06:18.955750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:06:18.955765 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:06:18.955775 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:06:18.955786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:06:18.955796 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:06:18.955810 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:06:18.955824 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:06:18.955834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:06:18.955845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:06:18.955855 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:06:18.955865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:06:18.955875 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:06:18.955937 systemd-journald[183]: Collecting audit messages is disabled. Aug 13 07:06:18.955963 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:06:18.955979 systemd-journald[183]: Journal started Aug 13 07:06:18.956002 systemd-journald[183]: Runtime Journal (/run/log/journal/0c426f782ef94866b6b5d450521c6e66) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:06:18.956430 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:06:18.988291 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:06:18.988329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:06:18.985292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:06:18.992161 kernel: Bridge firewalling registered Aug 13 07:06:18.990234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:18.991253 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:06:18.993376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:06:19.004386 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:06:19.007334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:06:19.009051 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:06:19.023032 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:06:19.037073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:06:19.038964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:06:19.041042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:06:19.043160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
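Every entry in this transcript shares one shape: a microsecond timestamp, a source (kernel, systemd[1], or a tool like ignition[648]), and a message. A small parser sketch for pulling the stream apart, assuming that format holds throughout:

# Sketch: split transcript entries into (timestamp, source, message).
import re

ENTRY = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) "  # Aug 13 07:06:18.930640
    r"(?P<src>kernel|[\w.-]+\[\d+\]): "               # kernel / systemd[1] / ignition[648]
    r"(?P<msg>.*)"
)

m = ENTRY.match("Aug 13 07:06:18.930681 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a")
if m:
    print(m.group("ts"), "|", m.group("src"), "|", m.group("msg"))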
Aug 13 07:06:19.050467 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:06:19.056411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:06:19.074032 dracut-cmdline[219]: dracut-dracut-053 Aug 13 07:06:19.077870 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:06:19.090903 systemd-resolved[221]: Positive Trust Anchors: Aug 13 07:06:19.091757 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:06:19.091811 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:06:19.096987 systemd-resolved[221]: Defaulting to hostname 'linux'. Aug 13 07:06:19.099194 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:06:19.100328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:06:19.193169 kernel: SCSI subsystem initialized Aug 13 07:06:19.206196 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:06:19.221143 kernel: iscsi: registered transport (tcp) Aug 13 07:06:19.249179 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:06:19.249266 kernel: QLogic iSCSI HBA Driver Aug 13 07:06:19.316557 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:06:19.324481 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:06:19.379483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:06:19.379594 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:06:19.379618 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:06:19.431202 kernel: raid6: avx2x4 gen() 14564 MB/s Aug 13 07:06:19.448181 kernel: raid6: avx2x2 gen() 14708 MB/s Aug 13 07:06:19.465749 kernel: raid6: avx2x1 gen() 11835 MB/s Aug 13 07:06:19.465890 kernel: raid6: using algorithm avx2x2 gen() 14708 MB/s Aug 13 07:06:19.483344 kernel: raid6: .... xor() 13680 MB/s, rmw enabled Aug 13 07:06:19.483491 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:06:19.512197 kernel: xor: automatically using best checksumming function avx Aug 13 07:06:19.730668 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:06:19.750034 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:06:19.757504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:06:19.780646 systemd-udevd[404]: Using default interface naming scheme 'v255'. 
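The raid6 lines record a boot-time benchmark: the kernel times each SIMD generator and keeps the fastest one (avx2x2 at 14708 MB/s in this boot). The selection, mirrored trivially with the measured numbers:

# Sketch: the raid6 pick above is just an argmax over the benchmarked speeds.
gen_speeds_mb_s = {"avx2x4": 14564, "avx2x2": 14708, "avx2x1": 11835}
best = max(gen_speeds_mb_s, key=gen_speeds_mb_s.get)
print(f"raid6: using algorithm {best} gen() {gen_speeds_mb_s[best]} MB/s")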
Aug 13 07:06:19.785990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:06:19.794384 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:06:19.820282 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Aug 13 07:06:19.867009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:06:19.883595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:06:19.947716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:06:19.955586 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:06:19.978199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:06:19.980277 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:06:19.981509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:06:19.982408 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:06:19.990401 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:06:20.021281 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:06:20.047159 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 13 07:06:20.054240 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 07:06:20.058432 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:06:20.074180 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:06:20.081199 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:06:20.081676 kernel: GPT:9289727 != 125829119 Aug 13 07:06:20.081697 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:06:20.083322 kernel: GPT:9289727 != 125829119 Aug 13 07:06:20.083380 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:06:20.084770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:06:20.107334 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 13 07:06:20.107626 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Aug 13 07:06:20.121790 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:06:20.121857 kernel: AES CTR mode by8 optimization enabled Aug 13 07:06:20.124317 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:06:20.124465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:06:20.126194 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:06:20.126707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:06:20.126865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:20.127582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:06:20.133543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:06:20.149219 kernel: ACPI: bus type USB registered Aug 13 07:06:20.149247 kernel: usbcore: registered new interface driver usbfs Aug 13 07:06:20.149268 kernel: usbcore: registered new interface driver hub Aug 13 07:06:20.149280 kernel: usbcore: registered new device driver usb Aug 13 07:06:20.154169 kernel: libata version 3.00 loaded. 
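The GPT warnings above are the classic signature of a disk grown after imaging: the backup GPT header still sits at LBA 9289727, where the original smaller image ended, while the kernel expects it on the last sector, LBA 125829119. The arithmetic, with the sector count taken from the virtio_blk line:

# Sketch: quantify the resize behind "GPT:9289727 != 125829119".
sector_size = 512
total_sectors = 125829120             # 64.4 GB / 60.0 GiB disk
expected_alt_lba = total_sectors - 1  # where the backup header should be
actual_alt_lba = 9289727              # where it actually is
grown_gib = (expected_alt_lba - actual_alt_lba) * sector_size / 2**30
print(f"disk grew by ~{grown_gib:.1f} GiB since the image was written")

Relocating the backup structures to the end of the disk (for example with sgdisk -e, or with GNU Parted as the kernel message suggests) clears the warning.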
Aug 13 07:06:20.193565 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 07:06:20.207168 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464) Aug 13 07:06:20.212180 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460) Aug 13 07:06:20.216140 kernel: scsi host1: ata_piix Aug 13 07:06:20.227172 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 07:06:20.227459 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 07:06:20.227585 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 07:06:20.227734 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 13 07:06:20.230146 kernel: hub 1-0:1.0: USB hub found Aug 13 07:06:20.232888 kernel: hub 1-0:1.0: 2 ports detected Aug 13 07:06:20.233096 kernel: scsi host2: ata_piix Aug 13 07:06:20.233305 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 07:06:20.233321 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 07:06:20.246539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:06:20.270017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:20.282093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:06:20.287638 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:06:20.288389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:06:20.294959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:06:20.300358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:06:20.304652 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:06:20.309498 disk-uuid[542]: Primary Header is updated. Aug 13 07:06:20.309498 disk-uuid[542]: Secondary Entries is updated. Aug 13 07:06:20.309498 disk-uuid[542]: Secondary Header is updated. Aug 13 07:06:20.323145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:06:20.328349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:06:20.335410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:06:21.342204 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:06:21.343315 disk-uuid[544]: The operation has completed successfully. Aug 13 07:06:21.390968 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:06:21.391089 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:06:21.402368 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:06:21.410033 sh[565]: Success Aug 13 07:06:21.425523 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:06:21.492959 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:06:21.502244 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:06:21.503575 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
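verity-setup turns the verity.usr= and verity.usrhash= parameters from the command line into the /dev/mapper/usr device, so reads from /usr are checked against the logged root hash. A small post-boot sketch that only assumes the standard device-mapper sysfs layout (it locates the mapping; it does not reverify the hash tree):

# Sketch: find the dm device backing /dev/mapper/usr, the mapping that
# verity-setup created from verity.usrhash above.
import pathlib

for dm in pathlib.Path("/sys/block").glob("dm-*"):
    if (dm / "dm" / "name").read_text().strip() == "usr":
        print(f"/dev/mapper/usr is {dm.name}")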
Aug 13 07:06:21.536785 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:06:21.536871 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:06:21.536889 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:06:21.536905 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:06:21.537465 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:06:21.544945 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:06:21.546012 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:06:21.560416 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:06:21.563421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:06:21.574761 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:06:21.574847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:06:21.574863 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:06:21.581249 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:06:21.592507 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:06:21.594281 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:06:21.600050 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:06:21.608527 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:06:21.694539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:06:21.705424 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:06:21.743385 systemd-networkd[748]: lo: Link UP Aug 13 07:06:21.744287 systemd-networkd[748]: lo: Gained carrier Aug 13 07:06:21.748466 systemd-networkd[748]: Enumeration completed Aug 13 07:06:21.748616 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:06:21.749673 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:06:21.749679 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 07:06:21.751046 systemd[1]: Reached target network.target - Network. Aug 13 07:06:21.753585 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:06:21.753589 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:06:21.755832 systemd-networkd[748]: eth0: Link UP Aug 13 07:06:21.755838 systemd-networkd[748]: eth0: Gained carrier Aug 13 07:06:21.755851 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:06:21.762517 systemd-networkd[748]: eth1: Link UP Aug 13 07:06:21.762527 systemd-networkd[748]: eth1: Gained carrier Aug 13 07:06:21.762544 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
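networkd's "Link UP" / "Gained carrier" transitions track the kernel's operstate for each interface, which can be read straight from sysfs. A minimal sketch, assuming the interface names from this boot:

# Sketch: check carrier state for the two links networkd just brought up.
import pathlib

for iface in ("eth0", "eth1"):
    p = pathlib.Path("/sys/class/net") / iface / "operstate"
    if p.exists():
        print(iface, p.read_text().strip())  # "up" once carrier is gained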
Aug 13 07:06:21.777253 systemd-networkd[748]: eth0: DHCPv4 address 64.23.220.168/19, gateway 64.23.192.1 acquired from 169.254.169.253 Aug 13 07:06:21.783257 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Aug 13 07:06:21.800627 ignition[648]: Ignition 2.19.0 Aug 13 07:06:21.800643 ignition[648]: Stage: fetch-offline Aug 13 07:06:21.800713 ignition[648]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:21.800731 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:21.800892 ignition[648]: parsed url from cmdline: "" Aug 13 07:06:21.800897 ignition[648]: no config URL provided Aug 13 07:06:21.800903 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:06:21.800913 ignition[648]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:06:21.800919 ignition[648]: failed to fetch config: resource requires networking Aug 13 07:06:21.804059 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:06:21.801820 ignition[648]: Ignition finished successfully Aug 13 07:06:21.816790 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:06:21.837804 ignition[757]: Ignition 2.19.0 Aug 13 07:06:21.837815 ignition[757]: Stage: fetch Aug 13 07:06:21.838021 ignition[757]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:21.838032 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:21.838263 ignition[757]: parsed url from cmdline: "" Aug 13 07:06:21.838269 ignition[757]: no config URL provided Aug 13 07:06:21.838277 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:06:21.838291 ignition[757]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:06:21.838318 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 07:06:21.852787 ignition[757]: GET result: OK Aug 13 07:06:21.852921 ignition[757]: parsing config with SHA512: d570a4941ce536bbbf86c4f35d285bf40ff01b7994549fea135f5dfcd5a51eefaef62e22e61c7de27d4637b6703e82e3d9f65f2f906c53c190e74ea86b0f5748 Aug 13 07:06:21.859221 unknown[757]: fetched base config from "system" Aug 13 07:06:21.859235 unknown[757]: fetched base config from "system" Aug 13 07:06:21.859641 ignition[757]: fetch: fetch complete Aug 13 07:06:21.859244 unknown[757]: fetched user config from "digitalocean" Aug 13 07:06:21.859647 ignition[757]: fetch: fetch passed Aug 13 07:06:21.859710 ignition[757]: Ignition finished successfully Aug 13 07:06:21.863264 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:06:21.869399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:06:21.891217 ignition[764]: Ignition 2.19.0 Aug 13 07:06:21.891233 ignition[764]: Stage: kargs Aug 13 07:06:21.891497 ignition[764]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:21.891509 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:21.892597 ignition[764]: kargs: kargs passed Aug 13 07:06:21.895100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:06:21.892675 ignition[764]: Ignition finished successfully Aug 13 07:06:21.900419 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
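After the fetch stage retrieves user-data from the metadata service, Ignition logs the SHA512 of the config it parsed ("parsing config with SHA512: d570a4..."). Recomputing that digest over a saved copy confirms it is the same config; the filename below is a hypothetical local copy, not a path from the log:

# Sketch: recompute the config digest Ignition logged above.
import hashlib

with open("user-data.ign", "rb") as f:        # hypothetical saved copy
    digest = hashlib.sha512(f.read()).hexdigest()

print(digest.startswith("d570a4941ce536bb"))  # compare with the logged value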
Aug 13 07:06:21.919557 ignition[770]: Ignition 2.19.0 Aug 13 07:06:21.919568 ignition[770]: Stage: disks Aug 13 07:06:21.919794 ignition[770]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:21.919806 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:21.922598 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:06:21.920793 ignition[770]: disks: disks passed Aug 13 07:06:21.923713 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:06:21.920851 ignition[770]: Ignition finished successfully Aug 13 07:06:21.927554 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:06:21.928358 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:06:21.929254 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:06:21.929620 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:06:21.948534 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:06:21.965855 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:06:21.969506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:06:21.975245 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:06:22.079149 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:06:22.079494 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:06:22.080469 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:06:22.090321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:06:22.093597 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:06:22.096702 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Aug 13 07:06:22.101259 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:06:22.102801 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:06:22.111515 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786) Aug 13 07:06:22.111557 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:06:22.111606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:06:22.111619 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:06:22.102850 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:06:22.115157 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:06:22.116880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:06:22.120953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:06:22.129573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
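The fsck summary just above reads as used/total counts: 14 of 553520 inodes and 52654 of 553472 blocks are in use on ROOT at this point in first boot. As percentages:

# Sketch: interpret "ROOT: clean, 14/553520 files, 52654/553472 blocks".
inodes_used, inodes_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
print(f"inodes: {100 * inodes_used / inodes_total:.4f}% used")  # ~0.0025%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~9.5%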
Aug 13 07:06:22.211274 coreos-metadata[789]: Aug 13 07:06:22.211 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:06:22.215012 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:06:22.222326 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:06:22.224049 coreos-metadata[789]: Aug 13 07:06:22.224 INFO Fetch successful Aug 13 07:06:22.225896 coreos-metadata[788]: Aug 13 07:06:22.225 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:06:22.229977 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:06:22.231323 coreos-metadata[789]: Aug 13 07:06:22.230 INFO wrote hostname ci-4081.3.5-4-06119f59db to /sysroot/etc/hostname Aug 13 07:06:22.233029 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:06:22.236718 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:06:22.237785 coreos-metadata[788]: Aug 13 07:06:22.237 INFO Fetch successful Aug 13 07:06:22.245705 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 13 07:06:22.246584 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Aug 13 07:06:22.340130 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:06:22.344284 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:06:22.347312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:06:22.359514 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:06:22.378714 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:06:22.396238 ignition[907]: INFO : Ignition 2.19.0 Aug 13 07:06:22.398436 ignition[907]: INFO : Stage: mount Aug 13 07:06:22.398436 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:22.398436 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:22.399681 ignition[907]: INFO : mount: mount passed Aug 13 07:06:22.399681 ignition[907]: INFO : Ignition finished successfully Aug 13 07:06:22.400457 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:06:22.407350 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:06:22.533783 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:06:22.540509 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:06:22.566171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918) Aug 13 07:06:22.568292 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:06:22.568349 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:06:22.569194 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:06:22.574163 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:06:22.575092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
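Both coreos-metadata fetches above hit DigitalOcean's link-local metadata service, the same endpoint that supplied the hostname written to /sysroot/etc/hostname. It is reachable only from inside the droplet; a sketch:

# Sketch: fetch the droplet metadata the agents above used
# (http://169.254.169.254/metadata/v1.json is link-local, droplet-only).
import json
import urllib.request

with urllib.request.urlopen("http://169.254.169.254/metadata/v1.json", timeout=5) as r:
    meta = json.load(r)

print(meta.get("hostname"))  # ci-4081.3.5-4-06119f59db in this boot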
Aug 13 07:06:22.604068 ignition[935]: INFO : Ignition 2.19.0
Aug 13 07:06:22.604068 ignition[935]: INFO : Stage: files
Aug 13 07:06:22.605232 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:22.605232 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:22.606350 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:06:22.606906 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:06:22.606906 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:06:22.609816 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:06:22.610513 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:06:22.610513 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:06:22.610442 unknown[935]: wrote ssh authorized keys file for user: core
Aug 13 07:06:22.612363 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:06:22.612363 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 07:06:22.642253 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:06:22.824760 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:06:22.826041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:06:22.832985 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:06:22.832985 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:06:22.832985 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:06:22.832985 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:06:22.832985 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:06:22.832985 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 07:06:23.091438 systemd-networkd[748]: eth1: Gained IPv6LL
Aug 13 07:06:23.326316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:06:23.411538 systemd-networkd[748]: eth0: Gained IPv6LL
Aug 13 07:06:24.600630 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:06:24.600630 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:06:24.602865 ignition[935]: INFO : files: files passed
Aug 13 07:06:24.602865 ignition[935]: INFO : Ignition finished successfully
Aug 13 07:06:24.603975 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:06:24.615481 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:06:24.620335 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:06:24.622380 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:06:24.622970 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:06:24.648935 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:06:24.648935 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:06:24.651199 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:06:24.654035 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:06:24.655467 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:06:24.660416 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:06:24.713522 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:06:24.713696 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:06:24.715452 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:06:24.715965 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:06:24.716909 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:06:24.724373 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:06:24.741231 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:06:24.745420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:06:24.764885 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:06:24.765692 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:06:24.766659 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:06:24.767540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:06:24.767822 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:06:24.769324 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:06:24.770019 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:06:24.770884 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:06:24.771863 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:06:24.772696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:06:24.773716 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:06:24.774748 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:06:24.775699 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:06:24.776729 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:06:24.777575 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:06:24.778392 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:06:24.778637 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:06:24.779748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:06:24.780906 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:06:24.781700 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:06:24.781857 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:06:24.782715 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:06:24.782953 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:06:24.784451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:06:24.784649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:06:24.785563 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:06:24.785709 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:06:24.786470 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:06:24.786683 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:06:24.801715 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:06:24.806458 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:06:24.807378 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:06:24.807556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:06:24.809051 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:06:24.812040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:06:24.818247 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:06:24.819150 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:06:24.827765 ignition[987]: INFO : Ignition 2.19.0
Aug 13 07:06:24.829417 ignition[987]: INFO : Stage: umount
Aug 13 07:06:24.829417 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:24.829417 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:24.833407 ignition[987]: INFO : umount: umount passed
Aug 13 07:06:24.835246 ignition[987]: INFO : Ignition finished successfully
Aug 13 07:06:24.835836 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:06:24.836036 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:06:24.839612 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:06:24.839761 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:06:24.841587 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:06:24.841657 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:06:24.842248 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:06:24.842298 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:06:24.847344 systemd[1]: Stopped target network.target - Network.
Aug 13 07:06:24.848610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:06:24.848700 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:06:24.850170 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:06:24.850563 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:06:24.855263 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:06:24.855869 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:06:24.856288 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:06:24.856754 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:06:24.856823 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:06:24.858421 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:06:24.858502 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:06:24.859052 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:06:24.859170 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:06:24.859895 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:06:24.859952 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:06:24.861234 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:06:24.862353 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:06:24.864244 systemd-networkd[748]: eth1: DHCPv6 lease lost
Aug 13 07:06:24.865141 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:06:24.866077 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:06:24.866857 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:06:24.868740 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:06:24.868907 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:06:24.869418 systemd-networkd[748]: eth0: DHCPv6 lease lost
Aug 13 07:06:24.874028 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:06:24.874341 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:06:24.877544 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:06:24.877735 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:06:24.879464 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:06:24.879586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:06:24.885414 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:06:24.885988 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:06:24.886228 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:06:24.886896 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:06:24.886980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:06:24.888706 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:06:24.888815 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:06:24.889376 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:06:24.889452 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:06:24.897500 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:06:24.915575 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:06:24.916480 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:06:24.918339 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:06:24.919114 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:06:24.921040 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:06:24.921231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:06:24.921800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:06:24.921848 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:06:24.922734 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:06:24.922796 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:06:24.924061 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:06:24.924170 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:06:24.925106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:06:24.925203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:06:24.932482 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:06:24.933766 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:06:24.933881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:06:24.936033 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:06:24.936161 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:06:24.937342 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:06:24.937425 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:06:24.938739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:06:24.938813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:24.945446 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:06:24.945632 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:06:24.947857 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:06:24.957551 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:06:24.969192 systemd[1]: Switching root.
Aug 13 07:06:25.005751 systemd-journald[183]: Journal stopped
Aug 13 07:06:26.152231 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:06:26.152350 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:06:26.152375 kernel: SELinux: policy capability open_perms=1
Aug 13 07:06:26.152394 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:06:26.152411 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:06:26.152437 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:06:26.152450 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:06:26.152462 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:06:26.152475 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:06:26.152496 systemd[1]: Successfully loaded SELinux policy in 46.430ms.
Aug 13 07:06:26.152530 kernel: audit: type=1403 audit(1755068785.194:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:06:26.152549 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.176ms.
Aug 13 07:06:26.152569 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:06:26.152590 systemd[1]: Detected virtualization kvm.
Aug 13 07:06:26.152618 systemd[1]: Detected architecture x86-64.
Aug 13 07:06:26.152631 systemd[1]: Detected first boot.
Aug 13 07:06:26.152647 systemd[1]: Hostname set to <ci-4081.3.5-4-06119f59db>.
Aug 13 07:06:26.152660 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:06:26.152674 zram_generator::config[1029]: No configuration found.
Aug 13 07:06:26.152721 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:06:26.152735 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:06:26.152762 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:06:26.152777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:06:26.152792 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:06:26.152812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:06:26.152824 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:06:26.152842 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:06:26.152855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:06:26.152869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:06:26.152882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:06:26.152899 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:06:26.152912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:06:26.152926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:06:26.152938 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:06:26.152951 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:06:26.152964 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:06:26.152976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:06:26.152997 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:06:26.153009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:06:26.153026 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:06:26.153040 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:06:26.153053 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:06:26.153065 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:06:26.153078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:06:26.153091 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:06:26.153109 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:06:26.153141 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:06:26.153161 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:06:26.153181 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:06:26.153194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:06:26.153208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:06:26.153227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:06:26.153244 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:06:26.153265 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:06:26.153291 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:06:26.153309 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:06:26.153329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:26.153348 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:06:26.153373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:06:26.153394 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:06:26.153410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:06:26.153427 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:06:26.153446 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:06:26.153475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:06:26.153491 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:06:26.153504 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:06:26.153523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:06:26.153536 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:06:26.153550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:06:26.153563 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:06:26.153577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:06:26.153602 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:06:26.153623 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:06:26.153643 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:06:26.153661 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:06:26.153674 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:06:26.153693 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:06:26.153713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:06:26.153734 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:06:26.153797 systemd-journald[1098]: Collecting audit messages is disabled.
Aug 13 07:06:26.153845 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:06:26.153867 systemd-journald[1098]: Journal started
Aug 13 07:06:26.153915 systemd-journald[1098]: Runtime Journal (/run/log/journal/0c426f782ef94866b6b5d450521c6e66) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:06:25.868058 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:06:25.891163 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:06:25.891787 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:06:26.171164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:06:26.171266 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:06:26.171284 systemd[1]: Stopped verity-setup.service.
Aug 13 07:06:26.178214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:26.188942 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:06:26.189941 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:06:26.190651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:06:26.191762 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:06:26.192245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:06:26.194398 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:06:26.195400 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:06:26.198211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:06:26.199228 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:06:26.199447 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:06:26.200468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:06:26.200685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:06:26.205862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:06:26.206069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:06:26.207971 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:06:26.227033 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:06:26.237251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:06:26.240370 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:06:26.245172 kernel: ACPI: bus type drm_connector registered
Aug 13 07:06:26.249339 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:06:26.249884 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:06:26.249959 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:06:26.251781 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:06:26.260388 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:06:26.265991 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:06:26.266673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:06:26.271347 kernel: fuse: init (API version 7.39)
Aug 13 07:06:26.272448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:06:26.281403 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:06:26.288240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:06:26.295451 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:06:26.298711 kernel: loop: module loaded
Aug 13 07:06:26.304782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:06:26.311438 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:06:26.319564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:06:26.324393 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:06:26.324821 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:06:26.325703 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:06:26.326212 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:06:26.326910 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:06:26.327046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:06:26.328733 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:06:26.329603 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:06:26.354324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:06:26.355064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:06:26.374417 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:06:26.376028 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:06:26.405060 systemd-journald[1098]: Time spent on flushing to /var/log/journal/0c426f782ef94866b6b5d450521c6e66 is 48.227ms for 987 entries.
Aug 13 07:06:26.405060 systemd-journald[1098]: System Journal (/var/log/journal/0c426f782ef94866b6b5d450521c6e66) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:06:26.472158 systemd-journald[1098]: Received client request to flush runtime journal.
Aug 13 07:06:26.472233 kernel: loop0: detected capacity change from 0 to 224512
Aug 13 07:06:26.425369 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:06:26.425980 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:06:26.437535 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:06:26.479499 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:06:26.508653 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:06:26.525455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:06:26.532044 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:06:26.537334 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:06:26.549228 kernel: loop1: detected capacity change from 0 to 8
Aug 13 07:06:26.551477 systemd-tmpfiles[1145]: ACLs are not supported, ignoring.
Aug 13 07:06:26.551499 systemd-tmpfiles[1145]: ACLs are not supported, ignoring.
Aug 13 07:06:26.568849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:06:26.580411 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:06:26.592497 kernel: loop2: detected capacity change from 0 to 140768
Aug 13 07:06:26.655383 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:06:26.677515 kernel: loop3: detected capacity change from 0 to 142488
Aug 13 07:06:26.673721 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:06:26.699219 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:06:26.712546 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:06:26.738539 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 07:06:26.782335 kernel: loop4: detected capacity change from 0 to 224512
Aug 13 07:06:26.799139 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Aug 13 07:06:26.800224 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Aug 13 07:06:26.808189 kernel: loop5: detected capacity change from 0 to 8
Aug 13 07:06:26.814280 kernel: loop6: detected capacity change from 0 to 140768
Aug 13 07:06:26.828436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:06:26.847339 kernel: loop7: detected capacity change from 0 to 142488
Aug 13 07:06:26.869558 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 13 07:06:26.870427 (sd-merge)[1177]: Merged extensions into '/usr'.
Aug 13 07:06:26.882778 systemd[1]: Reloading requested from client PID 1144 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:06:26.882807 systemd[1]: Reloading...
Aug 13 07:06:27.030181 zram_generator::config[1201]: No configuration found.
Aug 13 07:06:27.199876 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:06:27.271435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:06:27.350662 systemd[1]: Reloading finished in 467 ms.
Aug 13 07:06:27.373388 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:06:27.377302 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:06:27.388653 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:06:27.401381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:06:27.428056 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:06:27.428083 systemd[1]: Reloading...
Aug 13 07:06:27.472167 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:06:27.474277 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:06:27.476787 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:06:27.478651 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Aug 13 07:06:27.478905 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Aug 13 07:06:27.487916 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:06:27.487934 systemd-tmpfiles[1248]: Skipping /boot
Aug 13 07:06:27.531727 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:06:27.531749 systemd-tmpfiles[1248]: Skipping /boot
Aug 13 07:06:27.552149 zram_generator::config[1278]: No configuration found.
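(sd-merge) above is systemd-sysext's worker process: it collects extension images from the standard search directories and overlay-mounts their /usr and /opt trees onto the running system, which is why a daemon-reload follows immediately. A rough Python sketch of the discovery half only, assuming the search paths documented in systemd-sysext(8); the overlayfs mount itself is performed by systemd and is not shown:

from pathlib import Path

# Documented sysext search paths; /etc/extensions is where Ignition placed
# the kubernetes.raw symlink earlier in this boot.
SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def discover_extensions() -> list[str]:
    names = []
    for d in map(Path, SEARCH_DIRS):
        if d.is_dir():
            names.extend(p.stem for p in sorted(d.glob("*.raw")))
    return names

print(discover_extensions())  # e.g. ['kubernetes', ...] on this machine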
Aug 13 07:06:27.765655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:06:27.960658 systemd[1]: Reloading finished in 531 ms.
Aug 13 07:06:27.987079 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:06:27.998201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:06:28.021624 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:06:28.027523 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:06:28.037576 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:06:28.044583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:06:28.054671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:06:28.059286 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:06:28.063328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.063643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:06:28.071672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:06:28.081568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:06:28.090622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:06:28.091423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:06:28.091648 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.101559 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:06:28.104559 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.104868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:06:28.107201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:06:28.107440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.112177 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.112573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:06:28.124578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:06:28.126472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:06:28.126759 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.128832 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:06:28.138615 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:06:28.155542 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:06:28.167466 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:06:28.176291 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:06:28.177734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:06:28.177977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:06:28.179639 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:06:28.179862 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:06:28.192409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:06:28.192723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:06:28.198240 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:06:28.205890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:06:28.206004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:06:28.206056 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:06:28.210756 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Aug 13 07:06:28.215914 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:06:28.216188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:06:28.226723 augenrules[1357]: No rules
Aug 13 07:06:28.228908 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:06:28.230898 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:06:28.251764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:06:28.262396 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:06:28.292491 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:06:28.409026 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:06:28.410336 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:06:28.457959 systemd-resolved[1330]: Positive Trust Anchors:
Aug 13 07:06:28.457978 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:06:28.458022 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:06:28.467970 systemd-resolved[1330]: Using system hostname 'ci-4081.3.5-4-06119f59db'.
Aug 13 07:06:28.470859 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:06:28.471416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:06:28.480539 systemd-networkd[1367]: lo: Link UP
Aug 13 07:06:28.481637 systemd-networkd[1367]: lo: Gained carrier
Aug 13 07:06:28.484345 systemd-networkd[1367]: Enumeration completed
Aug 13 07:06:28.484516 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:06:28.485061 systemd[1]: Reached target network.target - Network.
Aug 13 07:06:28.492991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:06:28.502317 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:06:28.521083 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-e6:fe:b7:74:4b:82.network.
Aug 13 07:06:28.523269 systemd-networkd[1367]: eth1: Link UP
Aug 13 07:06:28.523283 systemd-networkd[1367]: eth1: Gained carrier
Aug 13 07:06:28.527203 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Aug 13 07:06:28.542314 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Aug 13 07:06:28.542710 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.542878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:06:28.545915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:06:28.553472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:06:28.556307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:06:28.556987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:06:28.557045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:06:28.557070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:06:28.583152 kernel: ISO 9660 Extensions: RRIP_1991A
Aug 13 07:06:28.587187 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Aug 13 07:06:28.612665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:06:28.612968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
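Each NIC above is matched to a generated unit named after its MAC address (10-e6:fe:b7:74:4b:82.network for eth1). The generated file itself never appears in the log; a typical DHCP unit of that form, rendered by an illustrative Python helper, might look like the following (the [Match]/[Network] keys are standard systemd.network(5) options, but the body is an assumption; only the path pattern and MAC come from the log):

NETWORK_UNIT = """\
[Match]
MACAddress={mac}

[Network]
DHCP=yes
"""

def render_unit(mac: str) -> tuple[str, str]:
    # Mirrors the naming scheme in the log: /run/systemd/network/10-<mac>.network
    return f"/run/systemd/network/10-{mac}.network", NETWORK_UNIT.format(mac=mac)

path, body = render_unit("e6:fe:b7:74:4b:82")
print(path)
print(body)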
Aug 13 07:06:28.614252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:06:28.614706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:06:28.621520 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:06:28.621714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:06:28.623523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:06:28.623620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:06:28.635157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372)
Aug 13 07:06:28.659838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:06:28.663498 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-8a:27:13:83:76:d7.network.
Aug 13 07:06:28.665772 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Aug 13 07:06:28.666401 systemd-networkd[1367]: eth0: Link UP
Aug 13 07:06:28.666413 systemd-networkd[1367]: eth0: Gained carrier
Aug 13 07:06:28.669051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:06:28.670636 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Aug 13 07:06:28.672660 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Aug 13 07:06:28.705177 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:06:28.710735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:06:28.716170 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:06:28.721151 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 13 07:06:28.754302 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:06:28.823523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:06:28.825724 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:06:28.857729 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Aug 13 07:06:28.857843 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Aug 13 07:06:28.860264 kernel: Console: switching to colour dummy device 80x25
Aug 13 07:06:28.861170 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 13 07:06:28.861243 kernel: [drm] features: -context_init
Aug 13 07:06:28.866157 kernel: [drm] number of scanouts: 1
Aug 13 07:06:28.866257 kernel: [drm] number of cap sets: 0
Aug 13 07:06:28.870144 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Aug 13 07:06:28.887638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:06:28.887892 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:28.919676 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Aug 13 07:06:28.919808 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:06:28.926379 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Aug 13 07:06:28.949808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:06:28.962505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:06:28.962778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:28.965363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:06:29.021582 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:06:29.051839 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:06:29.064695 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:06:29.073395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:29.082744 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:06:29.113865 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:06:29.114537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:06:29.114656 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:06:29.114860 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:06:29.115023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:06:29.115442 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:06:29.115692 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:06:29.115810 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:06:29.115905 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:06:29.115945 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:06:29.116025 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:06:29.119821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:06:29.122298 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:06:29.129516 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:06:29.132947 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:06:29.137959 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:06:29.140751 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:06:29.144613 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:06:29.145567 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:06:29.145629 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:06:29.158485 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:06:29.164437 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:06:29.170430 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:06:29.182784 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:06:29.194424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:06:29.200400 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:06:29.201270 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:06:29.212459 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:06:29.218350 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:06:29.229466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:06:29.229956 jq[1438]: false
Aug 13 07:06:29.240607 coreos-metadata[1436]: Aug 13 07:06:29.240 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:06:29.244408 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:06:29.258422 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:06:29.264998 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:06:29.267471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found loop4
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found loop5
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found loop6
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found loop7
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda1
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda2
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda3
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found usr
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda4
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda6
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda7
Aug 13 07:06:29.269959 extend-filesystems[1441]: Found vda9
Aug 13 07:06:29.269959 extend-filesystems[1441]: Checking size of /dev/vda9
Aug 13 07:06:29.368434 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 13 07:06:29.368541 coreos-metadata[1436]: Aug 13 07:06:29.275 INFO Fetch successful
Aug 13 07:06:29.275042 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:06:29.368869 extend-filesystems[1441]: Resized partition /dev/vda9
Aug 13 07:06:29.292654 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:06:29.385959 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:06:29.298285 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:06:29.313996 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:06:29.314813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:06:29.333910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:06:29.413451 jq[1451]: true
Aug 13 07:06:29.335427 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:06:29.411640 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:06:29.427859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Aug 13 07:06:29.427984 tar[1456]: linux-amd64/LICENSE Aug 13 07:06:29.427984 tar[1456]: linux-amd64/helm Aug 13 07:06:29.416927 dbus-daemon[1437]: [system] SELinux support is enabled Aug 13 07:06:29.417927 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:06:29.424608 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:06:29.424660 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:06:29.426856 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:06:29.453560 jq[1466]: true Aug 13 07:06:29.426951 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 07:06:29.426971 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:06:29.517435 update_engine[1448]: I20250813 07:06:29.499025 1448 main.cc:92] Flatcar Update Engine starting Aug 13 07:06:29.511557 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:06:29.511756 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:06:29.524842 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:06:29.527980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:06:29.540906 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:06:29.551911 update_engine[1448]: I20250813 07:06:29.551491 1448 update_check_scheduler.cc:74] Next update check in 2m53s Aug 13 07:06:29.553346 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:06:29.621152 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:06:29.647585 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:06:29.647585 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:06:29.647585 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:06:29.651538 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Aug 13 07:06:29.651538 extend-filesystems[1441]: Found vdb Aug 13 07:06:29.650445 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:06:29.652289 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:06:29.655348 systemd-logind[1447]: New seat seat0. Aug 13 07:06:29.673571 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:06:29.673599 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:06:29.676388 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 13 07:06:29.685272 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:06:29.706194 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:06:29.724095 systemd[1]: Starting sshkeys.service... Aug 13 07:06:29.747383 systemd-networkd[1367]: eth1: Gained IPv6LL Aug 13 07:06:29.761382 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Aug 13 07:06:29.769726 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:06:29.791606 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:06:29.794449 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:06:29.800969 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:06:29.811257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:29.814050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:06:29.818249 systemd-networkd[1367]: eth0: Gained IPv6LL Aug 13 07:06:29.819506 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Aug 13 07:06:29.924232 coreos-metadata[1508]: Aug 13 07:06:29.924 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:06:29.935041 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:06:29.937915 coreos-metadata[1508]: Aug 13 07:06:29.937 INFO Fetch successful Aug 13 07:06:29.940465 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:06:29.952914 unknown[1508]: wrote ssh authorized keys file for user: core Aug 13 07:06:29.956741 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:06:29.989228 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:06:29.992696 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:06:29.999793 systemd[1]: Finished sshkeys.service. Aug 13 07:06:30.006157 containerd[1472]: time="2025-08-13T07:06:30.004703032Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:06:30.038712 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:06:30.047591 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:06:30.075956 containerd[1472]: time="2025-08-13T07:06:30.074994568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082414890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082494420Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082520968Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082734088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082781180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082886634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.082980 containerd[1472]: time="2025-08-13T07:06:30.082899987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.083471 containerd[1472]: time="2025-08-13T07:06:30.083449340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.084149 containerd[1472]: time="2025-08-13T07:06:30.083908628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.084765 containerd[1472]: time="2025-08-13T07:06:30.084256842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.084858 containerd[1472]: time="2025-08-13T07:06:30.084839271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.085530 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:06:30.086526 containerd[1472]: time="2025-08-13T07:06:30.085747320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.086526 containerd[1472]: time="2025-08-13T07:06:30.086039702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.090923 containerd[1472]: time="2025-08-13T07:06:30.086782014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.086884 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:06:30.092714 containerd[1472]: time="2025-08-13T07:06:30.092285936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:06:30.092714 containerd[1472]: time="2025-08-13T07:06:30.092492310Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:06:30.092714 containerd[1472]: time="2025-08-13T07:06:30.092547502Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:06:30.103235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:06:30.110276 containerd[1472]: time="2025-08-13T07:06:30.110210256Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Aug 13 07:06:30.110412 containerd[1472]: time="2025-08-13T07:06:30.110299758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:06:30.110412 containerd[1472]: time="2025-08-13T07:06:30.110322078Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:06:30.110412 containerd[1472]: time="2025-08-13T07:06:30.110346464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:06:30.110412 containerd[1472]: time="2025-08-13T07:06:30.110367474Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:06:30.112564 containerd[1472]: time="2025-08-13T07:06:30.112462227Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:06:30.114478 containerd[1472]: time="2025-08-13T07:06:30.114423842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:06:30.114650 containerd[1472]: time="2025-08-13T07:06:30.114614893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:06:30.114650 containerd[1472]: time="2025-08-13T07:06:30.114635845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:06:30.114650 containerd[1472]: time="2025-08-13T07:06:30.114649630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114664395Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114677979Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114690664Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114704518Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114718453Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114731494Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114743150Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114758691Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114778436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114792510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Aug 13 07:06:30.114791 containerd[1472]: time="2025-08-13T07:06:30.114804947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114832153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114846915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114861858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114874029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114887935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114901551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114915307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114927442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114937705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114950248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114965490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114986528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.114997888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.116221 containerd[1472]: time="2025-08-13T07:06:30.115007786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116318939Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116370195Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116384009Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116397239Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116407464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116421002Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116433885Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:06:30.117819 containerd[1472]: time="2025-08-13T07:06:30.116446524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.118552 containerd[1472]: time="2025-08-13T07:06:30.116794055Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:06:30.118552 containerd[1472]: time="2025-08-13T07:06:30.116863537Z" level=info msg="Connect containerd service" Aug 13 07:06:30.118552 containerd[1472]: time="2025-08-13T07:06:30.116919906Z" level=info msg="using 
legacy CRI server" Aug 13 07:06:30.118552 containerd[1472]: time="2025-08-13T07:06:30.116928217Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:06:30.118552 containerd[1472]: time="2025-08-13T07:06:30.117050804Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:06:30.124343 containerd[1472]: time="2025-08-13T07:06:30.124040006Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:06:30.126402 containerd[1472]: time="2025-08-13T07:06:30.125781971Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:06:30.126402 containerd[1472]: time="2025-08-13T07:06:30.125846722Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137028869Z" level=info msg="Start subscribing containerd event" Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137150425Z" level=info msg="Start recovering state" Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137264095Z" level=info msg="Start event monitor" Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137289580Z" level=info msg="Start snapshots syncer" Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137300943Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137325845Z" level=info msg="Start streaming server" Aug 13 07:06:30.138935 containerd[1472]: time="2025-08-13T07:06:30.137511144Z" level=info msg="containerd successfully booted in 0.137789s" Aug 13 07:06:30.137706 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:06:30.159226 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:06:30.171717 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:06:30.181535 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:06:30.184084 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:06:30.593915 tar[1456]: linux-amd64/README.md Aug 13 07:06:30.612291 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:06:31.309458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:31.310279 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:06:31.310938 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:06:31.314745 systemd[1]: Startup finished in 1.093s (kernel) + 6.474s (initrd) + 6.166s (userspace) = 13.734s. Aug 13 07:06:32.035484 kubelet[1559]: E0813 07:06:32.035412 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:06:32.039106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:06:32.039317 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:06:32.039694 systemd[1]: kubelet.service: Consumed 1.403s CPU time. Aug 13 07:06:32.716078 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:06:32.724512 systemd[1]: Started sshd@0-64.23.220.168:22-139.178.89.65:37272.service - OpenSSH per-connection server daemon (139.178.89.65:37272). Aug 13 07:06:32.795737 sshd[1572]: Accepted publickey for core from 139.178.89.65 port 37272 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:32.798832 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:32.811836 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:06:32.816534 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:06:32.819364 systemd-logind[1447]: New session 1 of user core. Aug 13 07:06:32.850268 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:06:32.864746 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:06:32.868744 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:06:32.995311 systemd[1576]: Queued start job for default target default.target. Aug 13 07:06:33.007616 systemd[1576]: Created slice app.slice - User Application Slice. Aug 13 07:06:33.007762 systemd[1576]: Reached target paths.target - Paths. Aug 13 07:06:33.007832 systemd[1576]: Reached target timers.target - Timers. Aug 13 07:06:33.010193 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:06:33.035529 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:06:33.035668 systemd[1576]: Reached target sockets.target - Sockets. Aug 13 07:06:33.035685 systemd[1576]: Reached target basic.target - Basic System. Aug 13 07:06:33.035730 systemd[1576]: Reached target default.target - Main User Target. Aug 13 07:06:33.035764 systemd[1576]: Startup finished in 157ms. Aug 13 07:06:33.035890 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:06:33.047466 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:06:33.118805 systemd[1]: Started sshd@1-64.23.220.168:22-139.178.89.65:37284.service - OpenSSH per-connection server daemon (139.178.89.65:37284). Aug 13 07:06:33.167462 sshd[1587]: Accepted publickey for core from 139.178.89.65 port 37284 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.169839 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.178002 systemd-logind[1447]: New session 2 of user core. Aug 13 07:06:33.182450 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:06:33.244312 sshd[1587]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.254803 systemd[1]: sshd@1-64.23.220.168:22-139.178.89.65:37284.service: Deactivated successfully. Aug 13 07:06:33.256905 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:06:33.259343 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:06:33.268535 systemd[1]: Started sshd@2-64.23.220.168:22-139.178.89.65:37296.service - OpenSSH per-connection server daemon (139.178.89.65:37296). Aug 13 07:06:33.270962 systemd-logind[1447]: Removed session 2. 
Aug 13 07:06:33.311084 sshd[1594]: Accepted publickey for core from 139.178.89.65 port 37296 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.313077 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.320560 systemd-logind[1447]: New session 3 of user core. Aug 13 07:06:33.326684 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:06:33.387262 sshd[1594]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.398321 systemd[1]: sshd@2-64.23.220.168:22-139.178.89.65:37296.service: Deactivated successfully. Aug 13 07:06:33.400195 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:06:33.402429 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:06:33.407519 systemd[1]: Started sshd@3-64.23.220.168:22-139.178.89.65:37308.service - OpenSSH per-connection server daemon (139.178.89.65:37308). Aug 13 07:06:33.409306 systemd-logind[1447]: Removed session 3. Aug 13 07:06:33.448649 sshd[1601]: Accepted publickey for core from 139.178.89.65 port 37308 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.451254 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.456560 systemd-logind[1447]: New session 4 of user core. Aug 13 07:06:33.462439 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:06:33.527899 sshd[1601]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.544378 systemd[1]: sshd@3-64.23.220.168:22-139.178.89.65:37308.service: Deactivated successfully. Aug 13 07:06:33.546324 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:06:33.548948 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:06:33.553697 systemd[1]: Started sshd@4-64.23.220.168:22-139.178.89.65:37310.service - OpenSSH per-connection server daemon (139.178.89.65:37310). Aug 13 07:06:33.557182 systemd-logind[1447]: Removed session 4. Aug 13 07:06:33.601157 sshd[1608]: Accepted publickey for core from 139.178.89.65 port 37310 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.603992 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.612274 systemd-logind[1447]: New session 5 of user core. Aug 13 07:06:33.619431 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:06:33.693447 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:06:33.693839 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:33.710161 sudo[1611]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:33.714577 sshd[1608]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.725392 systemd[1]: sshd@4-64.23.220.168:22-139.178.89.65:37310.service: Deactivated successfully. Aug 13 07:06:33.727786 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:06:33.730386 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:06:33.735664 systemd[1]: Started sshd@5-64.23.220.168:22-139.178.89.65:37318.service - OpenSSH per-connection server daemon (139.178.89.65:37318). Aug 13 07:06:33.737808 systemd-logind[1447]: Removed session 5. 
Aug 13 07:06:33.788821 sshd[1616]: Accepted publickey for core from 139.178.89.65 port 37318 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.791917 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.798149 systemd-logind[1447]: New session 6 of user core. Aug 13 07:06:33.809436 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:06:33.870936 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:06:33.871333 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:33.875745 sudo[1620]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:33.882830 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:06:33.883477 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:33.907026 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:06:33.909253 auditctl[1623]: No rules Aug 13 07:06:33.909762 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:06:33.910289 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:06:33.917624 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:06:33.956472 augenrules[1641]: No rules Aug 13 07:06:33.958369 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:06:33.960020 sudo[1619]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:33.964403 sshd[1616]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.981410 systemd[1]: sshd@5-64.23.220.168:22-139.178.89.65:37318.service: Deactivated successfully. Aug 13 07:06:33.983991 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:06:33.986358 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:06:33.991625 systemd[1]: Started sshd@6-64.23.220.168:22-139.178.89.65:37326.service - OpenSSH per-connection server daemon (139.178.89.65:37326). Aug 13 07:06:33.994233 systemd-logind[1447]: Removed session 6. Aug 13 07:06:34.033386 sshd[1649]: Accepted publickey for core from 139.178.89.65 port 37326 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:34.035252 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:34.042243 systemd-logind[1447]: New session 7 of user core. Aug 13 07:06:34.054421 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:06:34.115337 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:06:34.115660 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:34.566809 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:06:34.567293 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:06:35.056470 dockerd[1667]: time="2025-08-13T07:06:35.055885554Z" level=info msg="Starting up" Aug 13 07:06:35.187095 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2065240803-merged.mount: Deactivated successfully. 
Aug 13 07:06:35.196986 systemd[1]: var-lib-docker-metacopy\x2dcheck2268854323-merged.mount: Deactivated successfully. Aug 13 07:06:35.217597 dockerd[1667]: time="2025-08-13T07:06:35.217530472Z" level=info msg="Loading containers: start." Aug 13 07:06:35.373161 kernel: Initializing XFRM netlink socket Aug 13 07:06:35.409565 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Aug 13 07:06:35.420049 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Aug 13 07:06:35.480869 systemd-networkd[1367]: docker0: Link UP Aug 13 07:06:35.481201 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Aug 13 07:06:35.502640 dockerd[1667]: time="2025-08-13T07:06:35.502445795Z" level=info msg="Loading containers: done." Aug 13 07:06:35.523856 dockerd[1667]: time="2025-08-13T07:06:35.523159461Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:06:35.523856 dockerd[1667]: time="2025-08-13T07:06:35.523346567Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:06:35.523856 dockerd[1667]: time="2025-08-13T07:06:35.523533125Z" level=info msg="Daemon has completed initialization" Aug 13 07:06:35.561462 dockerd[1667]: time="2025-08-13T07:06:35.561343209Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:06:35.561842 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:06:36.551048 containerd[1472]: time="2025-08-13T07:06:36.550999979Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 07:06:37.171676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003714490.mount: Deactivated successfully. 
Aug 13 07:06:38.693923 containerd[1472]: time="2025-08-13T07:06:38.693846311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:38.695057 containerd[1472]: time="2025-08-13T07:06:38.695007257Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 07:06:38.695850 containerd[1472]: time="2025-08-13T07:06:38.695785830Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:38.699366 containerd[1472]: time="2025-08-13T07:06:38.699287933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:38.703797 containerd[1472]: time="2025-08-13T07:06:38.702991629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.151947402s" Aug 13 07:06:38.703797 containerd[1472]: time="2025-08-13T07:06:38.703055239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 07:06:38.704756 containerd[1472]: time="2025-08-13T07:06:38.704722936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 07:06:40.806887 containerd[1472]: time="2025-08-13T07:06:40.806817314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:40.809112 containerd[1472]: time="2025-08-13T07:06:40.807896084Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 07:06:40.809112 containerd[1472]: time="2025-08-13T07:06:40.809004789Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:40.812758 containerd[1472]: time="2025-08-13T07:06:40.812700847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:40.815282 containerd[1472]: time="2025-08-13T07:06:40.814327809Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 2.109558361s" Aug 13 07:06:40.815282 containerd[1472]: time="2025-08-13T07:06:40.814387648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 07:06:40.815282 
containerd[1472]: time="2025-08-13T07:06:40.815248681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 07:06:42.211318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:06:42.221352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:42.421161 containerd[1472]: time="2025-08-13T07:06:42.420444108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:42.423264 containerd[1472]: time="2025-08-13T07:06:42.422501269Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 07:06:42.424013 containerd[1472]: time="2025-08-13T07:06:42.423943851Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:42.433038 containerd[1472]: time="2025-08-13T07:06:42.431193112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:42.433827 containerd[1472]: time="2025-08-13T07:06:42.433133019Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.617780848s" Aug 13 07:06:42.433939 containerd[1472]: time="2025-08-13T07:06:42.433833243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 07:06:42.434744 containerd[1472]: time="2025-08-13T07:06:42.434688150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 07:06:42.458487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:42.461087 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:06:42.533768 kubelet[1884]: E0813 07:06:42.533685 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:06:42.538878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:06:42.539152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:06:42.820582 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 07:06:43.597930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406336105.mount: Deactivated successfully. 
Aug 13 07:06:44.246164 containerd[1472]: time="2025-08-13T07:06:44.245951758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:44.247361 containerd[1472]: time="2025-08-13T07:06:44.247096253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 07:06:44.247938 containerd[1472]: time="2025-08-13T07:06:44.247902385Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:44.250146 containerd[1472]: time="2025-08-13T07:06:44.250087904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:44.251182 containerd[1472]: time="2025-08-13T07:06:44.251139529Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.815801576s" Aug 13 07:06:44.251320 containerd[1472]: time="2025-08-13T07:06:44.251301666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 07:06:44.251964 containerd[1472]: time="2025-08-13T07:06:44.251938546Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:06:44.759624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171649883.mount: Deactivated successfully. 
Aug 13 07:06:45.724350 containerd[1472]: time="2025-08-13T07:06:45.724273241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:45.725475 containerd[1472]: time="2025-08-13T07:06:45.725429558Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:06:45.726147 containerd[1472]: time="2025-08-13T07:06:45.725821977Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:45.730158 containerd[1472]: time="2025-08-13T07:06:45.729419234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:45.730993 containerd[1472]: time="2025-08-13T07:06:45.730728539Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.478758419s" Aug 13 07:06:45.730993 containerd[1472]: time="2025-08-13T07:06:45.730773058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:06:45.731520 containerd[1472]: time="2025-08-13T07:06:45.731299152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:06:45.876380 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Aug 13 07:06:46.228717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217703335.mount: Deactivated successfully. 
Aug 13 07:06:46.234231 containerd[1472]: time="2025-08-13T07:06:46.233184097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:46.235069 containerd[1472]: time="2025-08-13T07:06:46.234820844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:06:46.235718 containerd[1472]: time="2025-08-13T07:06:46.235672343Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:46.239154 containerd[1472]: time="2025-08-13T07:06:46.238687472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:46.239972 containerd[1472]: time="2025-08-13T07:06:46.239672386Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.344205ms" Aug 13 07:06:46.239972 containerd[1472]: time="2025-08-13T07:06:46.239723653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:06:46.240972 containerd[1472]: time="2025-08-13T07:06:46.240738750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 07:06:46.811423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766895530.mount: Deactivated successfully. Aug 13 07:06:48.671222 containerd[1472]: time="2025-08-13T07:06:48.670438128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:48.673179 containerd[1472]: time="2025-08-13T07:06:48.673076269Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 07:06:48.676392 containerd[1472]: time="2025-08-13T07:06:48.676176176Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:48.679039 containerd[1472]: time="2025-08-13T07:06:48.678955069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:48.680423 containerd[1472]: time="2025-08-13T07:06:48.680211781Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.439436781s" Aug 13 07:06:48.680423 containerd[1472]: time="2025-08-13T07:06:48.680258618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 07:06:51.893321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:06:51.903519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:51.946317 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit session-7.scope)... Aug 13 07:06:51.946342 systemd[1]: Reloading... Aug 13 07:06:52.085170 zram_generator::config[2072]: No configuration found. Aug 13 07:06:52.229825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:52.316922 systemd[1]: Reloading finished in 370 ms. Aug 13 07:06:52.380053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:52.385377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:52.389064 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:06:52.389393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:52.395688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:52.584454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:52.585648 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:06:52.658188 kubelet[2131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:06:52.658721 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:06:52.658797 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:06:52.659028 kubelet[2131]: I0813 07:06:52.658983 2131 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:06:53.064902 kubelet[2131]: I0813 07:06:53.064848 2131 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:06:53.064902 kubelet[2131]: I0813 07:06:53.064891 2131 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:06:53.065422 kubelet[2131]: I0813 07:06:53.065274 2131 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:06:53.106475 kubelet[2131]: I0813 07:06:53.105004 2131 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:53.106475 kubelet[2131]: E0813 07:06:53.106360 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.220.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:53.115293 kubelet[2131]: E0813 07:06:53.114384 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:06:53.115293 kubelet[2131]: I0813 07:06:53.114430 2131 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:06:53.118940 kubelet[2131]: I0813 07:06:53.118875 2131 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:06:53.123327 kubelet[2131]: I0813 07:06:53.123229 2131 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:06:53.123615 kubelet[2131]: I0813 07:06:53.123316 2131 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-4-06119f59db","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:06:53.123615 kubelet[2131]: I0813 07:06:53.123608 2131 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:06:53.123615 kubelet[2131]: I0813 07:06:53.123627 2131 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:06:53.123836 kubelet[2131]: I0813 07:06:53.123821 2131 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:53.128154 kubelet[2131]: I0813 07:06:53.127941 2131 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:06:53.128154 kubelet[2131]: I0813 07:06:53.128001 2131 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:06:53.128154 kubelet[2131]: I0813 07:06:53.128034 2131 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:06:53.128154 kubelet[2131]: I0813 07:06:53.128057 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:06:53.136096 kubelet[2131]: W0813 07:06:53.135194 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.220.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-4-06119f59db&limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:53.136096 kubelet[2131]: E0813 07:06:53.135262 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.220.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-4-06119f59db&limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:53.136096 
kubelet[2131]: W0813 07:06:53.135675 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.220.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:53.136096 kubelet[2131]: E0813 07:06:53.135709 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.220.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:53.136563 kubelet[2131]: I0813 07:06:53.136540 2131 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:06:53.141022 kubelet[2131]: I0813 07:06:53.140986 2131 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:06:53.141820 kubelet[2131]: W0813 07:06:53.141789 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:06:53.144256 kubelet[2131]: I0813 07:06:53.144224 2131 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:06:53.144354 kubelet[2131]: I0813 07:06:53.144277 2131 server.go:1287] "Started kubelet" Aug 13 07:06:53.144789 kubelet[2131]: I0813 07:06:53.144745 2131 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:06:53.153926 kubelet[2131]: I0813 07:06:53.153821 2131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:06:53.154475 kubelet[2131]: I0813 07:06:53.154444 2131 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:06:53.155069 kubelet[2131]: I0813 07:06:53.155050 2131 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:06:53.160074 kubelet[2131]: I0813 07:06:53.160046 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:06:53.163399 kubelet[2131]: E0813 07:06:53.159705 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.220.168:6443/api/v1/namespaces/default/events\": dial tcp 64.23.220.168:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-4-06119f59db.185b41c96bd450d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-4-06119f59db,UID:ci-4081.3.5-4-06119f59db,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-4-06119f59db,},FirstTimestamp:2025-08-13 07:06:53.144248533 +0000 UTC m=+0.551723472,LastTimestamp:2025-08-13 07:06:53.144248533 +0000 UTC m=+0.551723472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-4-06119f59db,}" Aug 13 07:06:53.164190 kubelet[2131]: I0813 07:06:53.163951 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:06:53.170842 kubelet[2131]: E0813 07:06:53.169646 2131 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-4-06119f59db\" 
not found" Aug 13 07:06:53.170842 kubelet[2131]: I0813 07:06:53.169725 2131 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:06:53.170842 kubelet[2131]: I0813 07:06:53.170084 2131 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:06:53.170842 kubelet[2131]: I0813 07:06:53.170532 2131 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:06:53.175573 kubelet[2131]: W0813 07:06:53.175486 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.220.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:53.175716 kubelet[2131]: E0813 07:06:53.175588 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.220.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:53.175927 kubelet[2131]: I0813 07:06:53.175899 2131 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:06:53.176051 kubelet[2131]: I0813 07:06:53.176025 2131 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:06:53.176724 kubelet[2131]: E0813 07:06:53.176665 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.220.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-4-06119f59db?timeout=10s\": dial tcp 64.23.220.168:6443: connect: connection refused" interval="200ms" Aug 13 07:06:53.183001 kubelet[2131]: E0813 07:06:53.182961 2131 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:06:53.183218 kubelet[2131]: I0813 07:06:53.183198 2131 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:06:53.197111 kubelet[2131]: I0813 07:06:53.197046 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:06:53.198793 kubelet[2131]: I0813 07:06:53.198749 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:06:53.198793 kubelet[2131]: I0813 07:06:53.198792 2131 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:06:53.198953 kubelet[2131]: I0813 07:06:53.198833 2131 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:06:53.198953 kubelet[2131]: I0813 07:06:53.198847 2131 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:06:53.198953 kubelet[2131]: E0813 07:06:53.198937 2131 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:06:53.208089 kubelet[2131]: W0813 07:06:53.207957 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.220.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:53.208320 kubelet[2131]: E0813 07:06:53.208056 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.220.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:53.215968 kubelet[2131]: I0813 07:06:53.215939 2131 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:06:53.216480 kubelet[2131]: I0813 07:06:53.216213 2131 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:06:53.216480 kubelet[2131]: I0813 07:06:53.216240 2131 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:53.218549 kubelet[2131]: I0813 07:06:53.218236 2131 policy_none.go:49] "None policy: Start" Aug 13 07:06:53.218549 kubelet[2131]: I0813 07:06:53.218264 2131 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:06:53.218549 kubelet[2131]: I0813 07:06:53.218277 2131 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:06:53.224765 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:06:53.243374 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:06:53.248463 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:06:53.259252 kubelet[2131]: I0813 07:06:53.258436 2131 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:06:53.259252 kubelet[2131]: I0813 07:06:53.258675 2131 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:06:53.259252 kubelet[2131]: I0813 07:06:53.258701 2131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:06:53.259510 kubelet[2131]: I0813 07:06:53.259492 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:06:53.260929 kubelet[2131]: E0813 07:06:53.260898 2131 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:06:53.261405 kubelet[2131]: E0813 07:06:53.261288 2131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-4-06119f59db\" not found" Aug 13 07:06:53.311397 systemd[1]: Created slice kubepods-burstable-pod0402137896ae3b92698734bdf8e07f54.slice - libcontainer container kubepods-burstable-pod0402137896ae3b92698734bdf8e07f54.slice. 
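
The systemd entries just above show how the kubelet maps its cgroup hierarchy onto slice units: /kubepods/burstable/pod<uid> becomes kubepods-burstable-pod<uid>.slice. A rough Go sketch of that naming pattern; the "-" → "_" escaping inside the UID is a hedged guess at the kubelet's systemd name translation (static-pod UIDs like the one below are config hashes and carry no dashes anyway):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName mirrors the unit names visible in the log:
    // kubepods-<qos>-pod<uid>.slice. The escaping rule is an assumption,
    // not the kubelet's actual code.
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "0402137896ae3b92698734bdf8e07f54"))
        // -> kubepods-burstable-pod0402137896ae3b92698734bdf8e07f54.slice, as above
    }
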
Aug 13 07:06:53.316814 systemd[1]: Created slice kubepods-burstable-podce4413ce342c31709d41978c70b31902.slice - libcontainer container kubepods-burstable-podce4413ce342c31709d41978c70b31902.slice. Aug 13 07:06:53.331835 kubelet[2131]: E0813 07:06:53.331771 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.339585 kubelet[2131]: E0813 07:06:53.339527 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.344044 systemd[1]: Created slice kubepods-burstable-pod27570259f978180c301519d01081831a.slice - libcontainer container kubepods-burstable-pod27570259f978180c301519d01081831a.slice. Aug 13 07:06:53.346970 kubelet[2131]: E0813 07:06:53.346850 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.360075 kubelet[2131]: I0813 07:06:53.359981 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.360488 kubelet[2131]: E0813 07:06:53.360459 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.220.168:6443/api/v1/nodes\": dial tcp 64.23.220.168:6443: connect: connection refused" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372222 kubelet[2131]: I0813 07:06:53.371932 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27570259f978180c301519d01081831a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-4-06119f59db\" (UID: \"27570259f978180c301519d01081831a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372222 kubelet[2131]: I0813 07:06:53.372000 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372222 kubelet[2131]: I0813 07:06:53.372025 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372222 kubelet[2131]: I0813 07:06:53.372076 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372222 kubelet[2131]: I0813 07:06:53.372152 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372584 kubelet[2131]: I0813 07:06:53.372189 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0402137896ae3b92698734bdf8e07f54-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-4-06119f59db\" (UID: \"0402137896ae3b92698734bdf8e07f54\") " pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372584 kubelet[2131]: I0813 07:06:53.372215 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27570259f978180c301519d01081831a-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-4-06119f59db\" (UID: \"27570259f978180c301519d01081831a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372584 kubelet[2131]: I0813 07:06:53.372244 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27570259f978180c301519d01081831a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-4-06119f59db\" (UID: \"27570259f978180c301519d01081831a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.372584 kubelet[2131]: I0813 07:06:53.372264 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.377612 kubelet[2131]: E0813 07:06:53.377561 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.220.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-4-06119f59db?timeout=10s\": dial tcp 64.23.220.168:6443: connect: connection refused" interval="400ms" Aug 13 07:06:53.561859 kubelet[2131]: I0813 07:06:53.561812 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.562465 kubelet[2131]: E0813 07:06:53.562428 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.220.168:6443/api/v1/nodes\": dial tcp 64.23.220.168:6443: connect: connection refused" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.633730 kubelet[2131]: E0813 07:06:53.633228 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:53.634671 containerd[1472]: time="2025-08-13T07:06:53.634306799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-4-06119f59db,Uid:0402137896ae3b92698734bdf8e07f54,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:53.640491 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Aug 13 07:06:53.641245 kubelet[2131]: E0813 07:06:53.640945 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:53.642227 containerd[1472]: time="2025-08-13T07:06:53.641590858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-4-06119f59db,Uid:ce4413ce342c31709d41978c70b31902,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:53.648408 kubelet[2131]: E0813 07:06:53.648360 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:53.649480 containerd[1472]: time="2025-08-13T07:06:53.649031183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-4-06119f59db,Uid:27570259f978180c301519d01081831a,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:53.778980 kubelet[2131]: E0813 07:06:53.778924 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.220.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-4-06119f59db?timeout=10s\": dial tcp 64.23.220.168:6443: connect: connection refused" interval="800ms" Aug 13 07:06:53.964664 kubelet[2131]: I0813 07:06:53.964597 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:53.965104 kubelet[2131]: E0813 07:06:53.965061 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.220.168:6443/api/v1/nodes\": dial tcp 64.23.220.168:6443: connect: connection refused" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:54.119258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851635182.mount: Deactivated successfully. 
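
The lease controller's "Failed to ensure lease exists, will retry" entries back off geometrically: interval="200ms" on the first failure, then "400ms", "800ms" just above, and "1.6s" further down. A sketch of that doubling; the 7s ceiling is an assumption for illustration, not something the log ever reaches:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // first retry interval seen in the log
        maxInterval := 7 * time.Second     // assumed ceiling
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed, next retry in %s\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
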
Aug 13 07:06:54.124242 containerd[1472]: time="2025-08-13T07:06:54.123728309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:54.125373 containerd[1472]: time="2025-08-13T07:06:54.125328637Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:54.127188 containerd[1472]: time="2025-08-13T07:06:54.127141323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:06:54.127658 containerd[1472]: time="2025-08-13T07:06:54.127589429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:06:54.128530 containerd[1472]: time="2025-08-13T07:06:54.128491687Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:54.130069 containerd[1472]: time="2025-08-13T07:06:54.130016656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:06:54.134371 containerd[1472]: time="2025-08-13T07:06:54.134304501Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:54.136443 containerd[1472]: time="2025-08-13T07:06:54.136363331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.974088ms" Aug 13 07:06:54.138480 containerd[1472]: time="2025-08-13T07:06:54.138442252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:54.139950 containerd[1472]: time="2025-08-13T07:06:54.139914843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.775713ms" Aug 13 07:06:54.141629 containerd[1472]: time="2025-08-13T07:06:54.141593622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.921023ms" Aug 13 07:06:54.303881 containerd[1472]: time="2025-08-13T07:06:54.303641244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:54.305072 containerd[1472]: time="2025-08-13T07:06:54.304542159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:54.306016 containerd[1472]: time="2025-08-13T07:06:54.305716360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:54.306016 containerd[1472]: time="2025-08-13T07:06:54.305887578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:54.307626 containerd[1472]: time="2025-08-13T07:06:54.306654585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:54.307626 containerd[1472]: time="2025-08-13T07:06:54.306720448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:54.307626 containerd[1472]: time="2025-08-13T07:06:54.306733100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:54.307626 containerd[1472]: time="2025-08-13T07:06:54.306817559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:54.318647 containerd[1472]: time="2025-08-13T07:06:54.318379043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:54.318647 containerd[1472]: time="2025-08-13T07:06:54.318444659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:54.318647 containerd[1472]: time="2025-08-13T07:06:54.318459415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:54.318647 containerd[1472]: time="2025-08-13T07:06:54.318551273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:54.346372 systemd[1]: Started cri-containerd-c08a753071757b4fc22bb7560407a644b62d24ecf4557f529af9b15f474e0216.scope - libcontainer container c08a753071757b4fc22bb7560407a644b62d24ecf4557f529af9b15f474e0216. Aug 13 07:06:54.353033 systemd[1]: Started cri-containerd-2924b37467a8ecf179d4473cfe62d4e3a26af2dfd9371e6741aaaa2dbf66b9e0.scope - libcontainer container 2924b37467a8ecf179d4473cfe62d4e3a26af2dfd9371e6741aaaa2dbf66b9e0. Aug 13 07:06:54.359294 systemd[1]: Started cri-containerd-efb2d9e6bb99b4b32fcbbcfc3b70cc8a143c4168d9c9282d6387abc1d0beaddd.scope - libcontainer container efb2d9e6bb99b4b32fcbbcfc3b70cc8a143c4168d9c9282d6387abc1d0beaddd. 
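
Each successful pause-image pull above is logged by containerd in one fixed shape, with quotes journal-escaped as \". That regularity makes the lines easy to mine for metrics; the regexp below is fitted to exactly these entries and nothing broader:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    // Pattern fitted to the journal-escaped "Pulled image" lines above.
    var pulledRE = regexp.MustCompile(`Pulled image \\"([^\\]+)\\".*repo digest \\"([^\\]+)\\", size \\"(\d+)\\" in ([0-9.]+[a-z]+)`)

    func main() {
        line := `Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.974088ms`
        m := pulledRE.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        d, _ := time.ParseDuration(m[4])
        fmt.Printf("image=%s\ndigest=%s\nsize=%s bytes\ntook=%s\n", m[1], m[2], m[3], d)
    }
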
Aug 13 07:06:54.425059 kubelet[2131]: W0813 07:06:54.424896 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.220.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-4-06119f59db&limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:54.425059 kubelet[2131]: E0813 07:06:54.424991 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.220.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-4-06119f59db&limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:54.445817 containerd[1472]: time="2025-08-13T07:06:54.445383354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-4-06119f59db,Uid:ce4413ce342c31709d41978c70b31902,Namespace:kube-system,Attempt:0,} returns sandbox id \"c08a753071757b4fc22bb7560407a644b62d24ecf4557f529af9b15f474e0216\"" Aug 13 07:06:54.447543 kubelet[2131]: E0813 07:06:54.447463 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:54.454882 containerd[1472]: time="2025-08-13T07:06:54.454708249Z" level=info msg="CreateContainer within sandbox \"c08a753071757b4fc22bb7560407a644b62d24ecf4557f529af9b15f474e0216\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:06:54.462176 containerd[1472]: time="2025-08-13T07:06:54.462043404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-4-06119f59db,Uid:27570259f978180c301519d01081831a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2924b37467a8ecf179d4473cfe62d4e3a26af2dfd9371e6741aaaa2dbf66b9e0\"" Aug 13 07:06:54.464044 kubelet[2131]: E0813 07:06:54.463510 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:54.467605 containerd[1472]: time="2025-08-13T07:06:54.467550148Z" level=info msg="CreateContainer within sandbox \"2924b37467a8ecf179d4473cfe62d4e3a26af2dfd9371e6741aaaa2dbf66b9e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:06:54.475975 containerd[1472]: time="2025-08-13T07:06:54.475349391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-4-06119f59db,Uid:0402137896ae3b92698734bdf8e07f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"efb2d9e6bb99b4b32fcbbcfc3b70cc8a143c4168d9c9282d6387abc1d0beaddd\"" Aug 13 07:06:54.478186 kubelet[2131]: E0813 07:06:54.478079 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:54.482676 containerd[1472]: time="2025-08-13T07:06:54.482632123Z" level=info msg="CreateContainer within sandbox \"c08a753071757b4fc22bb7560407a644b62d24ecf4557f529af9b15f474e0216\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0ffcb5219c3cd13216ffedbec1e903c334c456a43f7e861ea4c7f4559232877\"" Aug 13 07:06:54.483700 containerd[1472]: time="2025-08-13T07:06:54.482927362Z" level=info msg="CreateContainer within sandbox 
\"efb2d9e6bb99b4b32fcbbcfc3b70cc8a143c4168d9c9282d6387abc1d0beaddd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:06:54.484287 containerd[1472]: time="2025-08-13T07:06:54.484243281Z" level=info msg="StartContainer for \"c0ffcb5219c3cd13216ffedbec1e903c334c456a43f7e861ea4c7f4559232877\"" Aug 13 07:06:54.496848 containerd[1472]: time="2025-08-13T07:06:54.495950437Z" level=info msg="CreateContainer within sandbox \"2924b37467a8ecf179d4473cfe62d4e3a26af2dfd9371e6741aaaa2dbf66b9e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4bf22bb810ca3a8f1e4613646bac17ed02757bb21fe834731d23d28fe1e814a\"" Aug 13 07:06:54.497789 containerd[1472]: time="2025-08-13T07:06:54.497605449Z" level=info msg="StartContainer for \"b4bf22bb810ca3a8f1e4613646bac17ed02757bb21fe834731d23d28fe1e814a\"" Aug 13 07:06:54.512393 containerd[1472]: time="2025-08-13T07:06:54.512302298Z" level=info msg="CreateContainer within sandbox \"efb2d9e6bb99b4b32fcbbcfc3b70cc8a143c4168d9c9282d6387abc1d0beaddd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"104b72c1bbfc57d4a9dfd6b33caa17c382aabecc3ec883f3c619079bc53457f5\"" Aug 13 07:06:54.513953 containerd[1472]: time="2025-08-13T07:06:54.513272437Z" level=info msg="StartContainer for \"104b72c1bbfc57d4a9dfd6b33caa17c382aabecc3ec883f3c619079bc53457f5\"" Aug 13 07:06:54.539428 systemd[1]: Started cri-containerd-c0ffcb5219c3cd13216ffedbec1e903c334c456a43f7e861ea4c7f4559232877.scope - libcontainer container c0ffcb5219c3cd13216ffedbec1e903c334c456a43f7e861ea4c7f4559232877. Aug 13 07:06:54.575639 kubelet[2131]: W0813 07:06:54.574142 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.220.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:54.575068 systemd[1]: Started cri-containerd-b4bf22bb810ca3a8f1e4613646bac17ed02757bb21fe834731d23d28fe1e814a.scope - libcontainer container b4bf22bb810ca3a8f1e4613646bac17ed02757bb21fe834731d23d28fe1e814a. Aug 13 07:06:54.577587 kubelet[2131]: E0813 07:06:54.577363 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.220.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:54.581414 kubelet[2131]: E0813 07:06:54.581362 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.220.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-4-06119f59db?timeout=10s\": dial tcp 64.23.220.168:6443: connect: connection refused" interval="1.6s" Aug 13 07:06:54.601444 systemd[1]: Started cri-containerd-104b72c1bbfc57d4a9dfd6b33caa17c382aabecc3ec883f3c619079bc53457f5.scope - libcontainer container 104b72c1bbfc57d4a9dfd6b33caa17c382aabecc3ec883f3c619079bc53457f5. 
Aug 13 07:06:54.627390 containerd[1472]: time="2025-08-13T07:06:54.627301063Z" level=info msg="StartContainer for \"c0ffcb5219c3cd13216ffedbec1e903c334c456a43f7e861ea4c7f4559232877\" returns successfully" Aug 13 07:06:54.648173 kubelet[2131]: W0813 07:06:54.648052 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.220.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:54.648173 kubelet[2131]: E0813 07:06:54.648139 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.220.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:54.690901 containerd[1472]: time="2025-08-13T07:06:54.690531882Z" level=info msg="StartContainer for \"b4bf22bb810ca3a8f1e4613646bac17ed02757bb21fe834731d23d28fe1e814a\" returns successfully" Aug 13 07:06:54.704776 kubelet[2131]: W0813 07:06:54.704611 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.220.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.220.168:6443: connect: connection refused Aug 13 07:06:54.704776 kubelet[2131]: E0813 07:06:54.704709 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.220.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.220.168:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:06:54.707699 containerd[1472]: time="2025-08-13T07:06:54.707636174Z" level=info msg="StartContainer for \"104b72c1bbfc57d4a9dfd6b33caa17c382aabecc3ec883f3c619079bc53457f5\" returns successfully" Aug 13 07:06:54.768389 kubelet[2131]: I0813 07:06:54.767721 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:54.769058 kubelet[2131]: E0813 07:06:54.769012 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.220.168:6443/api/v1/nodes\": dial tcp 64.23.220.168:6443: connect: connection refused" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:55.015425 kubelet[2131]: E0813 07:06:55.015256 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.220.168:6443/api/v1/namespaces/default/events\": dial tcp 64.23.220.168:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-4-06119f59db.185b41c96bd450d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-4-06119f59db,UID:ci-4081.3.5-4-06119f59db,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-4-06119f59db,},FirstTimestamp:2025-08-13 07:06:53.144248533 +0000 UTC m=+0.551723472,LastTimestamp:2025-08-13 07:06:53.144248533 +0000 UTC m=+0.551723472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-4-06119f59db,}" Aug 13 07:06:55.231388 kubelet[2131]: E0813 07:06:55.230931 2131 kubelet.go:3190] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:55.233082 kubelet[2131]: E0813 07:06:55.232770 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:55.235824 kubelet[2131]: E0813 07:06:55.235443 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:55.235824 kubelet[2131]: E0813 07:06:55.235648 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:55.239753 kubelet[2131]: E0813 07:06:55.239432 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:55.239753 kubelet[2131]: E0813 07:06:55.239625 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:56.241131 kubelet[2131]: E0813 07:06:56.240858 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:56.241131 kubelet[2131]: E0813 07:06:56.240997 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:56.241724 kubelet[2131]: E0813 07:06:56.241360 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:56.241724 kubelet[2131]: E0813 07:06:56.241582 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:56.242408 kubelet[2131]: E0813 07:06:56.242106 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:56.242408 kubelet[2131]: E0813 07:06:56.242336 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:56.373239 kubelet[2131]: I0813 07:06:56.372313 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:56.798241 kubelet[2131]: E0813 07:06:56.798197 2131 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-4-06119f59db\" not found" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.041089 kubelet[2131]: I0813 07:06:57.041038 2131 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.041089 kubelet[2131]: E0813 07:06:57.041096 2131 kubelet_node_status.go:548] "Error updating node status, will retry" 
err="error getting node \"ci-4081.3.5-4-06119f59db\": node \"ci-4081.3.5-4-06119f59db\" not found" Aug 13 07:06:57.074868 kubelet[2131]: I0813 07:06:57.074683 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.087259 kubelet[2131]: E0813 07:06:57.087203 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-4-06119f59db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.087259 kubelet[2131]: I0813 07:06:57.087255 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.090774 kubelet[2131]: E0813 07:06:57.090714 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.090774 kubelet[2131]: I0813 07:06:57.090762 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.093669 kubelet[2131]: E0813 07:06:57.093609 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-4-06119f59db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.139519 kubelet[2131]: I0813 07:06:57.139167 2131 apiserver.go:52] "Watching apiserver" Aug 13 07:06:57.171127 kubelet[2131]: I0813 07:06:57.170921 2131 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:06:57.241695 kubelet[2131]: I0813 07:06:57.241624 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.244950 kubelet[2131]: E0813 07:06:57.244895 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-4-06119f59db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:06:57.245213 kubelet[2131]: E0813 07:06:57.245173 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:59.054474 systemd[1]: Reloading requested from client PID 2406 ('systemctl') (unit session-7.scope)... Aug 13 07:06:59.054494 systemd[1]: Reloading... Aug 13 07:06:59.156173 zram_generator::config[2445]: No configuration found. 
Aug 13 07:06:59.296241 kubelet[2131]: I0813 07:06:59.295571 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:06:59.313188 kubelet[2131]: W0813 07:06:59.311915 2131 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:06:59.313725 kubelet[2131]: E0813 07:06:59.313597 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:06:59.392056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:59.518960 systemd[1]: Reloading finished in 463 ms. Aug 13 07:06:59.569331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:59.588773 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:06:59.589140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:59.589227 systemd[1]: kubelet.service: Consumed 1.067s CPU time, 128.0M memory peak, 0B memory swap peak. Aug 13 07:06:59.593727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:59.769373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:59.784500 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:06:59.888219 kubelet[2496]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:06:59.888219 kubelet[2496]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:06:59.888219 kubelet[2496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:06:59.888219 kubelet[2496]: I0813 07:06:59.887951 2496 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:06:59.900728 kubelet[2496]: I0813 07:06:59.900661 2496 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:06:59.903169 kubelet[2496]: I0813 07:06:59.900950 2496 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:06:59.903169 kubelet[2496]: I0813 07:06:59.901465 2496 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:06:59.903917 kubelet[2496]: I0813 07:06:59.903880 2496 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
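
After the reload, the restarted kubelet (now logging as PID 2496) reports client rotation on and loads its credential from /var/lib/kubelet/pki/kubelet-client-current.pem, conventionally a symlink to the newest cert/key pair. A quick way to confirm rotation keeps working is to print that certificate's validity window; a standard-library sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            panic(err)
        }
        // The file holds certificate and key concatenated; take the first
        // CERTIFICATE block.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                panic(err)
            }
            fmt.Printf("subject=%s notAfter=%s (expires in %s)\n",
                cert.Subject, cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
            return
        }
        fmt.Println("no certificate block found")
    }
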
Aug 13 07:06:59.908975 kubelet[2496]: I0813 07:06:59.908722 2496 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:59.913195 kubelet[2496]: E0813 07:06:59.913141 2496 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:06:59.915175 kubelet[2496]: I0813 07:06:59.913376 2496 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:06:59.917405 kubelet[2496]: I0813 07:06:59.917371 2496 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:06:59.917894 kubelet[2496]: I0813 07:06:59.917861 2496 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:06:59.918299 kubelet[2496]: I0813 07:06:59.918013 2496 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-4-06119f59db","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:06:59.918512 kubelet[2496]: I0813 07:06:59.918492 2496 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:06:59.918598 kubelet[2496]: I0813 07:06:59.918588 2496 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:06:59.918693 kubelet[2496]: I0813 07:06:59.918686 2496 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:59.918943 kubelet[2496]: I0813 07:06:59.918932 2496 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:06:59.919798 kubelet[2496]: I0813 07:06:59.919716 2496 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:06:59.919939 kubelet[2496]: I0813 07:06:59.919929 2496 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:06:59.920015 kubelet[2496]: I0813 07:06:59.920006 2496 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Aug 13 07:06:59.932146 kubelet[2496]: I0813 07:06:59.928856 2496 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:06:59.939021 kubelet[2496]: I0813 07:06:59.938979 2496 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:06:59.941173 kubelet[2496]: I0813 07:06:59.939567 2496 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:06:59.941173 kubelet[2496]: I0813 07:06:59.939605 2496 server.go:1287] "Started kubelet" Aug 13 07:06:59.945164 kubelet[2496]: I0813 07:06:59.943034 2496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:06:59.945691 kubelet[2496]: I0813 07:06:59.945641 2496 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:06:59.946814 kubelet[2496]: I0813 07:06:59.946779 2496 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:06:59.950242 kubelet[2496]: I0813 07:06:59.950168 2496 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:06:59.951177 kubelet[2496]: I0813 07:06:59.950597 2496 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:06:59.955336 kubelet[2496]: I0813 07:06:59.954971 2496 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:06:59.957517 kubelet[2496]: I0813 07:06:59.957490 2496 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:06:59.963671 kubelet[2496]: I0813 07:06:59.963633 2496 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:06:59.963791 kubelet[2496]: I0813 07:06:59.963775 2496 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:06:59.966703 kubelet[2496]: I0813 07:06:59.966446 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:06:59.970078 kubelet[2496]: I0813 07:06:59.968681 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:06:59.970078 kubelet[2496]: I0813 07:06:59.968730 2496 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:06:59.970078 kubelet[2496]: I0813 07:06:59.968760 2496 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:06:59.970078 kubelet[2496]: I0813 07:06:59.968769 2496 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:06:59.970078 kubelet[2496]: E0813 07:06:59.968830 2496 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:06:59.972800 kubelet[2496]: I0813 07:06:59.972623 2496 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:06:59.972800 kubelet[2496]: I0813 07:06:59.972738 2496 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:06:59.977293 kubelet[2496]: E0813 07:06:59.977028 2496 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:06:59.977293 kubelet[2496]: I0813 07:06:59.977184 2496 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:07:00.061048 kubelet[2496]: I0813 07:07:00.061011 2496 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:07:00.061048 kubelet[2496]: I0813 07:07:00.061035 2496 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:07:00.061048 kubelet[2496]: I0813 07:07:00.061062 2496 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:07:00.061335 kubelet[2496]: I0813 07:07:00.061313 2496 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:07:00.061379 kubelet[2496]: I0813 07:07:00.061335 2496 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:07:00.061379 kubelet[2496]: I0813 07:07:00.061362 2496 policy_none.go:49] "None policy: Start" Aug 13 07:07:00.061379 kubelet[2496]: I0813 07:07:00.061378 2496 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:07:00.061452 kubelet[2496]: I0813 07:07:00.061394 2496 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:07:00.061562 kubelet[2496]: I0813 07:07:00.061541 2496 state_mem.go:75] "Updated machine memory state" Aug 13 07:07:00.070492 kubelet[2496]: E0813 07:07:00.070443 2496 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:07:00.070671 kubelet[2496]: I0813 07:07:00.070654 2496 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:07:00.070908 kubelet[2496]: I0813 07:07:00.070891 2496 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:07:00.070965 kubelet[2496]: I0813 07:07:00.070913 2496 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:07:00.072139 kubelet[2496]: I0813 07:07:00.071433 2496 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:07:00.076035 kubelet[2496]: E0813 07:07:00.075246 2496 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:07:00.185756 kubelet[2496]: I0813 07:07:00.185591 2496 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.206937 kubelet[2496]: I0813 07:07:00.206890 2496 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.207616 kubelet[2496]: I0813 07:07:00.207358 2496 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.273739 kubelet[2496]: I0813 07:07:00.273532 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.276773 kubelet[2496]: I0813 07:07:00.276421 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.279162 kubelet[2496]: I0813 07:07:00.277198 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.293979 kubelet[2496]: W0813 07:07:00.293636 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:07:00.297782 kubelet[2496]: W0813 07:07:00.297515 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:07:00.299219 kubelet[2496]: W0813 07:07:00.299040 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:07:00.299219 kubelet[2496]: E0813 07:07:00.299182 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-4-06119f59db\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.366670 kubelet[2496]: I0813 07:07:00.366480 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27570259f978180c301519d01081831a-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-4-06119f59db\" (UID: \"27570259f978180c301519d01081831a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.366670 kubelet[2496]: I0813 07:07:00.366549 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0402137896ae3b92698734bdf8e07f54-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-4-06119f59db\" (UID: \"0402137896ae3b92698734bdf8e07f54\") " pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.366670 kubelet[2496]: I0813 07:07:00.366575 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27570259f978180c301519d01081831a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-4-06119f59db\" (UID: \"27570259f978180c301519d01081831a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.366670 kubelet[2496]: I0813 07:07:00.366595 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27570259f978180c301519d01081831a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-4-06119f59db\" (UID: 
\"27570259f978180c301519d01081831a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.366670 kubelet[2496]: I0813 07:07:00.366620 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.367047 kubelet[2496]: I0813 07:07:00.366642 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.367047 kubelet[2496]: I0813 07:07:00.366658 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.367047 kubelet[2496]: I0813 07:07:00.366674 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.367047 kubelet[2496]: I0813 07:07:00.366697 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce4413ce342c31709d41978c70b31902-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-4-06119f59db\" (UID: \"ce4413ce342c31709d41978c70b31902\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" Aug 13 07:07:00.595830 kubelet[2496]: E0813 07:07:00.595773 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:00.598939 kubelet[2496]: E0813 07:07:00.598814 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:00.600811 kubelet[2496]: E0813 07:07:00.600622 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:00.939133 kubelet[2496]: I0813 07:07:00.936915 2496 apiserver.go:52] "Watching apiserver" Aug 13 07:07:00.965430 kubelet[2496]: I0813 07:07:00.964769 2496 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:07:01.014857 kubelet[2496]: E0813 07:07:01.014814 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 
07:07:01.017281 kubelet[2496]: I0813 07:07:01.015214 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:01.017281 kubelet[2496]: I0813 07:07:01.015698 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:07:01.030583 kubelet[2496]: W0813 07:07:01.030349 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:07:01.030583 kubelet[2496]: E0813 07:07:01.030438 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-4-06119f59db\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" Aug 13 07:07:01.030766 kubelet[2496]: E0813 07:07:01.030713 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:01.038383 kubelet[2496]: W0813 07:07:01.038071 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:07:01.038383 kubelet[2496]: E0813 07:07:01.038182 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-4-06119f59db\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" Aug 13 07:07:01.038569 kubelet[2496]: E0813 07:07:01.038450 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:01.065083 kubelet[2496]: I0813 07:07:01.064839 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-4-06119f59db" podStartSLOduration=1.064820695 podStartE2EDuration="1.064820695s" podCreationTimestamp="2025-08-13 07:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:01.064801625 +0000 UTC m=+1.271218077" watchObservedRunningTime="2025-08-13 07:07:01.064820695 +0000 UTC m=+1.271237191" Aug 13 07:07:01.084410 kubelet[2496]: I0813 07:07:01.084248 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-4-06119f59db" podStartSLOduration=2.084225484 podStartE2EDuration="2.084225484s" podCreationTimestamp="2025-08-13 07:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:01.084178105 +0000 UTC m=+1.290594556" watchObservedRunningTime="2025-08-13 07:07:01.084225484 +0000 UTC m=+1.290641932" Aug 13 07:07:01.103460 kubelet[2496]: I0813 07:07:01.103377 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-4-06119f59db" podStartSLOduration=1.103353271 podStartE2EDuration="1.103353271s" podCreationTimestamp="2025-08-13 07:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:01.10325688 +0000 UTC m=+1.309673331" watchObservedRunningTime="2025-08-13 07:07:01.103353271 +0000 UTC m=+1.309769724" Aug 13 07:07:02.017075 kubelet[2496]: E0813 
07:07:02.016674 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:02.017075 kubelet[2496]: E0813 07:07:02.016985 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:02.362944 kubelet[2496]: E0813 07:07:02.362704 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:03.019073 kubelet[2496]: E0813 07:07:03.018970 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:03.855037 kubelet[2496]: I0813 07:07:03.854823 2496 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:07:03.855776 containerd[1472]: time="2025-08-13T07:07:03.855582028Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:07:03.856989 kubelet[2496]: I0813 07:07:03.856793 2496 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:07:04.828073 systemd[1]: Created slice kubepods-besteffort-pod42019248_9f31_4e40_9dc0_0526fe92e24f.slice - libcontainer container kubepods-besteffort-pod42019248_9f31_4e40_9dc0_0526fe92e24f.slice. Aug 13 07:07:04.894503 kubelet[2496]: I0813 07:07:04.894452 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42019248-9f31-4e40-9dc0-0526fe92e24f-kube-proxy\") pod \"kube-proxy-r97gz\" (UID: \"42019248-9f31-4e40-9dc0-0526fe92e24f\") " pod="kube-system/kube-proxy-r97gz" Aug 13 07:07:04.894503 kubelet[2496]: I0813 07:07:04.894503 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42019248-9f31-4e40-9dc0-0526fe92e24f-xtables-lock\") pod \"kube-proxy-r97gz\" (UID: \"42019248-9f31-4e40-9dc0-0526fe92e24f\") " pod="kube-system/kube-proxy-r97gz" Aug 13 07:07:04.894962 kubelet[2496]: I0813 07:07:04.894534 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42019248-9f31-4e40-9dc0-0526fe92e24f-lib-modules\") pod \"kube-proxy-r97gz\" (UID: \"42019248-9f31-4e40-9dc0-0526fe92e24f\") " pod="kube-system/kube-proxy-r97gz" Aug 13 07:07:04.894962 kubelet[2496]: I0813 07:07:04.894561 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5px\" (UniqueName: \"kubernetes.io/projected/42019248-9f31-4e40-9dc0-0526fe92e24f-kube-api-access-lb5px\") pod \"kube-proxy-r97gz\" (UID: \"42019248-9f31-4e40-9dc0-0526fe92e24f\") " pod="kube-system/kube-proxy-r97gz" Aug 13 07:07:04.984497 systemd[1]: Created slice kubepods-besteffort-podf27eba93_19c8_4cbe_a6ad_bd6c3b6a0bd7.slice - libcontainer container kubepods-besteffort-podf27eba93_19c8_4cbe_a6ad_bd6c3b6a0bd7.slice. 
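[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors above come from the kubelet capping resolv.conf at the glibc resolver limit of three nameservers; the applied line "67.207.67.3 67.207.67.2 67.207.67.3" suggests this droplet's resolv.conf likely lists the same server twice, so the truncation is cosmetic here. A minimal stdlib-only sketch of that check follows — the function layout and exact message are illustrative, not kubelet's actual code:

```go
// nameserver_limit.go - illustrative sketch of the check behind kubelet's
// "Nameserver limits exceeded" warning; not the real kubelet implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit that kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Only the first three entries are applied; the rest are dropped.
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```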
Aug 13 07:07:04.999390 kubelet[2496]: I0813 07:07:04.999339 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f27eba93-19c8-4cbe-a6ad-bd6c3b6a0bd7-var-lib-calico\") pod \"tigera-operator-747864d56d-4nz8n\" (UID: \"f27eba93-19c8-4cbe-a6ad-bd6c3b6a0bd7\") " pod="tigera-operator/tigera-operator-747864d56d-4nz8n" Aug 13 07:07:04.999557 kubelet[2496]: I0813 07:07:04.999479 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghk8\" (UniqueName: \"kubernetes.io/projected/f27eba93-19c8-4cbe-a6ad-bd6c3b6a0bd7-kube-api-access-hghk8\") pod \"tigera-operator-747864d56d-4nz8n\" (UID: \"f27eba93-19c8-4cbe-a6ad-bd6c3b6a0bd7\") " pod="tigera-operator/tigera-operator-747864d56d-4nz8n" Aug 13 07:07:05.137034 kubelet[2496]: E0813 07:07:05.136831 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:05.138790 containerd[1472]: time="2025-08-13T07:07:05.137881244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r97gz,Uid:42019248-9f31-4e40-9dc0-0526fe92e24f,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:05.171692 containerd[1472]: time="2025-08-13T07:07:05.171533952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:05.171692 containerd[1472]: time="2025-08-13T07:07:05.171623514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:05.171692 containerd[1472]: time="2025-08-13T07:07:05.171659801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:05.173197 containerd[1472]: time="2025-08-13T07:07:05.172939339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:05.204450 systemd[1]: Started cri-containerd-030675ebe148c247b0dfa64d9ea77a25d48f3b0be0f144a5d14586210ac06ba3.scope - libcontainer container 030675ebe148c247b0dfa64d9ea77a25d48f3b0be0f144a5d14586210ac06ba3. 
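[Editor's note] The RunPodSandbox record above is the kubelet driving containerd over the CRI gRPC API. A minimal client sketch of that call, assuming containerd's default socket at /run/containerd/containerd.sock and the k8s.io/cri-api module; the metadata values are copied from the log record, and a real sandbox config would carry more fields (log directory, DNS, linux options):

```go
// run_sandbox.go - hedged sketch of the CRI RunPodSandbox call seen above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Values taken from the PodSandboxMetadata in the log record.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-r97gz",
				Uid:       "42019248-9f31-4e40-9dc0-0526fe92e24f",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd returns the sandbox id that later shows up in the
	// "returns sandbox id" log line and in the systemd scope name.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```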
Aug 13 07:07:05.254144 containerd[1472]: time="2025-08-13T07:07:05.254067636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r97gz,Uid:42019248-9f31-4e40-9dc0-0526fe92e24f,Namespace:kube-system,Attempt:0,} returns sandbox id \"030675ebe148c247b0dfa64d9ea77a25d48f3b0be0f144a5d14586210ac06ba3\"" Aug 13 07:07:05.256023 kubelet[2496]: E0813 07:07:05.255981 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:05.262839 containerd[1472]: time="2025-08-13T07:07:05.262554548Z" level=info msg="CreateContainer within sandbox \"030675ebe148c247b0dfa64d9ea77a25d48f3b0be0f144a5d14586210ac06ba3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:07:05.280631 containerd[1472]: time="2025-08-13T07:07:05.280529572Z" level=info msg="CreateContainer within sandbox \"030675ebe148c247b0dfa64d9ea77a25d48f3b0be0f144a5d14586210ac06ba3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f804236b691246828e75017a4258bc874a304cd0653132c3e01ff9226eaad10\"" Aug 13 07:07:05.283575 containerd[1472]: time="2025-08-13T07:07:05.281598585Z" level=info msg="StartContainer for \"4f804236b691246828e75017a4258bc874a304cd0653132c3e01ff9226eaad10\"" Aug 13 07:07:05.299265 containerd[1472]: time="2025-08-13T07:07:05.299106801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-4nz8n,Uid:f27eba93-19c8-4cbe-a6ad-bd6c3b6a0bd7,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:07:05.333528 systemd[1]: Started cri-containerd-4f804236b691246828e75017a4258bc874a304cd0653132c3e01ff9226eaad10.scope - libcontainer container 4f804236b691246828e75017a4258bc874a304cd0653132c3e01ff9226eaad10. Aug 13 07:07:05.347925 containerd[1472]: time="2025-08-13T07:07:05.347528582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:05.347925 containerd[1472]: time="2025-08-13T07:07:05.347636713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:05.347925 containerd[1472]: time="2025-08-13T07:07:05.347660448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:05.347925 containerd[1472]: time="2025-08-13T07:07:05.347843124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:05.393733 systemd[1]: Started cri-containerd-8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624.scope - libcontainer container 8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624. 
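[Editor's note] The "Started cri-containerd-….scope" and "Created slice kubepods-besteffort-pod….slice" units above show the systemd cgroup driver's naming scheme: the pod UID has its dashes replaced with underscores to form the pod slice, and each container lands in a transient scope named after its CRI container id. A small stdlib sketch of that mapping; the cgroup-v2 mount point and hierarchy layout are assumptions about a typical configuration, not read from this log:

```go
// scope_name.go - reconstruct the systemd unit and cgroup path for the
// kube-proxy container from the ids in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	podUID := "42019248-9f31-4e40-9dc0-0526fe92e24f"
	containerID := "4f804236b691246828e75017a4258bc874a304cd0653132c3e01ff9226eaad10"

	// Dashes become underscores in slice names (systemd reserves "-" for
	// slice hierarchy), giving the kubepods-besteffort-pod….slice seen above.
	podSlice := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	scope := "cri-containerd-" + containerID + ".scope"
	fmt.Println(podSlice)
	fmt.Println(scope)

	// Assumed cgroup-v2 path for this layout; exists only on a live node.
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/" +
		podSlice + "/" + scope
	if _, err := os.Stat(path); err == nil {
		fmt.Println("cgroup present at", path)
	}
}
```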
Aug 13 07:07:05.423495 containerd[1472]: time="2025-08-13T07:07:05.423431371Z" level=info msg="StartContainer for \"4f804236b691246828e75017a4258bc874a304cd0653132c3e01ff9226eaad10\" returns successfully" Aug 13 07:07:05.480494 containerd[1472]: time="2025-08-13T07:07:05.479626030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-4nz8n,Uid:f27eba93-19c8-4cbe-a6ad-bd6c3b6a0bd7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624\"" Aug 13 07:07:05.487611 containerd[1472]: time="2025-08-13T07:07:05.487214045Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:07:06.894773 systemd-resolved[1330]: Clock change detected. Flushing caches. Aug 13 07:07:06.895083 systemd-timesyncd[1345]: Contacted time server 108.61.73.243:123 (2.flatcar.pool.ntp.org). Aug 13 07:07:06.895186 systemd-timesyncd[1345]: Initial clock synchronization to Wed 2025-08-13 07:07:06.894679 UTC. Aug 13 07:07:07.134378 kubelet[2496]: E0813 07:07:07.134051 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:07.148567 kubelet[2496]: I0813 07:07:07.148379 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r97gz" podStartSLOduration=3.148359512 podStartE2EDuration="3.148359512s" podCreationTimestamp="2025-08-13 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:07.148278142 +0000 UTC m=+6.257307038" watchObservedRunningTime="2025-08-13 07:07:07.148359512 +0000 UTC m=+6.257388411" Aug 13 07:07:08.042690 kubelet[2496]: E0813 07:07:08.042576 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:08.058303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353552406.mount: Deactivated successfully. 
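[Editor's note] The pod_startup_latency_tracker record above reports podStartSLOduration for kube-proxy-r97gz. When nothing was pulled (firstStartedPulling is the zero time, as here), the SLO duration reduces to observedRunningTime minus podCreationTimestamp. A stdlib sketch reproducing the logged number from the watchObservedRunningTime stamp:

```go
// slo_duration.go - reproduce podStartSLOduration=3.148359512s from the
// timestamps in the pod_startup_latency_tracker log line above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 07:07:04 +0000 UTC")
	running := mustParse("2025-08-13 07:07:07.148359512 +0000 UTC")
	// No image pull happened, so pull time contributes nothing.
	fmt.Println(running.Sub(created)) // 3.148359512s
}
```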
Aug 13 07:07:08.136666 kubelet[2496]: E0813 07:07:08.136619 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:08.592314 kubelet[2496]: E0813 07:07:08.591998 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:08.847200 containerd[1472]: time="2025-08-13T07:07:08.846754869Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:08.848167 containerd[1472]: time="2025-08-13T07:07:08.848100586Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:07:08.849116 containerd[1472]: time="2025-08-13T07:07:08.848788958Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:08.851759 containerd[1472]: time="2025-08-13T07:07:08.851676720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:08.852860 containerd[1472]: time="2025-08-13T07:07:08.852818900Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.26817564s" Aug 13 07:07:08.853124 containerd[1472]: time="2025-08-13T07:07:08.852996958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:07:08.863083 containerd[1472]: time="2025-08-13T07:07:08.862089947Z" level=info msg="CreateContainer within sandbox \"8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:07:08.884366 containerd[1472]: time="2025-08-13T07:07:08.884244643Z" level=info msg="CreateContainer within sandbox \"8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2\"" Aug 13 07:07:08.885955 containerd[1472]: time="2025-08-13T07:07:08.885649519Z" level=info msg="StartContainer for \"824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2\"" Aug 13 07:07:08.925766 systemd[1]: Started cri-containerd-824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2.scope - libcontainer container 824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2. 
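[Editor's note] The "Pulled image" record above gives both the bytes read for quay.io/tigera/operator:v1.38.3 and the wall-clock pull time, which implies roughly 11 MB/s from quay.io. The throughput figure below is plain arithmetic on the logged numbers, not something containerd reports:

```go
// pull_rate.go - back-of-envelope throughput for the operator image pull:
// "bytes read=25056543" over the logged "2.26817564s".
package main

import "fmt"

func main() {
	const bytesRead = 25056543 // from "stop pulling image ... bytes read=25056543"
	const seconds = 2.26817564 // from "... in 2.26817564s"
	fmt.Printf("~%.1f MB/s\n", bytesRead/seconds/1e6) // ~11.0 MB/s
}
```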
Aug 13 07:07:08.976853 containerd[1472]: time="2025-08-13T07:07:08.976616586Z" level=info msg="StartContainer for \"824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2\" returns successfully" Aug 13 07:07:09.141322 kubelet[2496]: E0813 07:07:09.140763 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:09.141322 kubelet[2496]: E0813 07:07:09.141162 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:10.143081 kubelet[2496]: E0813 07:07:10.143046 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:12.408557 systemd[1]: cri-containerd-824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2.scope: Deactivated successfully. Aug 13 07:07:12.456460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2-rootfs.mount: Deactivated successfully. Aug 13 07:07:12.463911 containerd[1472]: time="2025-08-13T07:07:12.463751912Z" level=info msg="shim disconnected" id=824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2 namespace=k8s.io Aug 13 07:07:12.464982 containerd[1472]: time="2025-08-13T07:07:12.463961136Z" level=warning msg="cleaning up after shim disconnected" id=824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2 namespace=k8s.io Aug 13 07:07:12.464982 containerd[1472]: time="2025-08-13T07:07:12.463973221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:12.491544 containerd[1472]: time="2025-08-13T07:07:12.490628352Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:07:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:07:13.175562 kubelet[2496]: I0813 07:07:13.175022 2496 scope.go:117] "RemoveContainer" containerID="824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2" Aug 13 07:07:13.184040 containerd[1472]: time="2025-08-13T07:07:13.183987489Z" level=info msg="CreateContainer within sandbox \"8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 13 07:07:13.209556 containerd[1472]: time="2025-08-13T07:07:13.209425769Z" level=info msg="CreateContainer within sandbox \"8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"809752ef09bc6450fe8d2b8d478336bdcd68dda5fb30ad60e1ef2aa9a18c8431\"" Aug 13 07:07:13.216722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696194084.mount: Deactivated successfully. Aug 13 07:07:13.220537 containerd[1472]: time="2025-08-13T07:07:13.217727382Z" level=info msg="StartContainer for \"809752ef09bc6450fe8d2b8d478336bdcd68dda5fb30ad60e1ef2aa9a18c8431\"" Aug 13 07:07:13.293260 systemd[1]: Started cri-containerd-809752ef09bc6450fe8d2b8d478336bdcd68dda5fb30ad60e1ef2aa9a18c8431.scope - libcontainer container 809752ef09bc6450fe8d2b8d478336bdcd68dda5fb30ad60e1ef2aa9a18c8431. 
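[Editor's note] The sequence above — scope deactivated, "shim disconnected", the scope.go "RemoveContainer" record, then CreateContainer with Attempt:1 — is the kubelet replacing the crashed tigera-operator container inside its existing sandbox. A hedged sketch of that replace flow with the same CRI client as earlier; the ids and image come from the log, but the strict remove-then-recreate ordering here is a simplification (the real kubelet interleaves this with backoff and garbage collection):

```go
// replace_container.go - illustrative replace of a dead container: remove
// the exited attempt, create attempt 1 in the same sandbox, start it.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	dead := "824c7a1534036b0f1d9db25ba50762bb604fe9c717ae95d90a3cc56a8d8f00a2"
	sandbox := "8dd3cb93f883759ad0ef31977061650adabf36f4f2bacf50b0f57b4002f77624"

	// Mirrors the scope.go "RemoveContainer" record above.
	if _, err := rt.RemoveContainer(ctx,
		&runtimeapi.RemoveContainerRequest{ContainerId: dead}); err != nil {
		log.Fatal(err)
	}
	// Attempt counter bumped to 1, matching ContainerMetadata{...,Attempt:1,}.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "tigera-operator", Attempt: 1},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.3"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```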
Aug 13 07:07:13.373084 containerd[1472]: time="2025-08-13T07:07:13.372956898Z" level=info msg="StartContainer for \"809752ef09bc6450fe8d2b8d478336bdcd68dda5fb30ad60e1ef2aa9a18c8431\" returns successfully" Aug 13 07:07:13.472616 kubelet[2496]: E0813 07:07:13.472405 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:14.191983 kubelet[2496]: I0813 07:07:14.191826 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-4nz8n" podStartSLOduration=7.916293005 podStartE2EDuration="10.191806713s" podCreationTimestamp="2025-08-13 07:07:04 +0000 UTC" firstStartedPulling="2025-08-13 07:07:05.483087389 +0000 UTC m=+5.689503834" lastFinishedPulling="2025-08-13 07:07:08.855988652 +0000 UTC m=+7.965017542" observedRunningTime="2025-08-13 07:07:09.154884072 +0000 UTC m=+8.263912965" watchObservedRunningTime="2025-08-13 07:07:14.191806713 +0000 UTC m=+13.300835612" Aug 13 07:07:15.679772 update_engine[1448]: I20250813 07:07:15.679678 1448 update_attempter.cc:509] Updating boot flags... Aug 13 07:07:15.726901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2933) Aug 13 07:07:15.816704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2933) Aug 13 07:07:15.945731 sudo[1652]: pam_unix(sudo:session): session closed for user root Aug 13 07:07:15.951105 sshd[1649]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:15.956425 systemd[1]: sshd@6-64.23.220.168:22-139.178.89.65:37326.service: Deactivated successfully. Aug 13 07:07:15.958902 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:07:15.959247 systemd[1]: session-7.scope: Consumed 5.921s CPU time, 146.0M memory peak, 0B memory swap peak. Aug 13 07:07:15.960678 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:07:15.962806 systemd-logind[1447]: Removed session 7. Aug 13 07:07:21.775430 systemd[1]: Created slice kubepods-besteffort-pod4cfaaab6_6eff_45ed_89ea_a2da98505bb2.slice - libcontainer container kubepods-besteffort-pod4cfaaab6_6eff_45ed_89ea_a2da98505bb2.slice. 
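[Editor's note] The burst of driver-call.go / plugins.go errors that follows below comes from the kubelet probing the Calico FlexVolume directory (nodeagent~uds) for a driver binary that is not executable on this image: the exec fails, stdout is empty, and "unexpected end of JSON input" is the unmarshal of that empty reply. A FlexVolume driver answers `init` with a JSON status object on stdout; a minimal stub that would satisfy the probe is sketched here (it would be installed as the `uds` binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ — whether Calico's real uds driver disables attach like this is an assumption, though the JSON shape is the documented FlexVolume contract):

```go
// flexvol_init.go - minimal FlexVolume driver stub. Kubelet execs
// "<driver> init" and expects JSON on stdout; an empty reply produces the
// "unexpected end of JSON input" errors seen in the log below.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // no attach/detach support
		})
		fmt.Println(string(out))
		return
	}
	// Other FlexVolume calls (mount, unmount, ...) are not implemented
	// in this stub; "Not supported" tells kubelet to fall back.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```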
Aug 13 07:07:21.807913 kubelet[2496]: I0813 07:07:21.807855 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4cfaaab6-6eff-45ed-89ea-a2da98505bb2-typha-certs\") pod \"calico-typha-86b957dc85-lmlvc\" (UID: \"4cfaaab6-6eff-45ed-89ea-a2da98505bb2\") " pod="calico-system/calico-typha-86b957dc85-lmlvc" Aug 13 07:07:21.807913 kubelet[2496]: I0813 07:07:21.807910 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfaaab6-6eff-45ed-89ea-a2da98505bb2-tigera-ca-bundle\") pod \"calico-typha-86b957dc85-lmlvc\" (UID: \"4cfaaab6-6eff-45ed-89ea-a2da98505bb2\") " pod="calico-system/calico-typha-86b957dc85-lmlvc" Aug 13 07:07:21.808829 kubelet[2496]: I0813 07:07:21.807938 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnh7v\" (UniqueName: \"kubernetes.io/projected/4cfaaab6-6eff-45ed-89ea-a2da98505bb2-kube-api-access-qnh7v\") pod \"calico-typha-86b957dc85-lmlvc\" (UID: \"4cfaaab6-6eff-45ed-89ea-a2da98505bb2\") " pod="calico-system/calico-typha-86b957dc85-lmlvc" Aug 13 07:07:22.067183 systemd[1]: Created slice kubepods-besteffort-pode6b71ad5_6ad4_4858_b72f_58c28eb07042.slice - libcontainer container kubepods-besteffort-pode6b71ad5_6ad4_4858_b72f_58c28eb07042.slice. Aug 13 07:07:22.085191 kubelet[2496]: E0813 07:07:22.085149 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:22.087132 containerd[1472]: time="2025-08-13T07:07:22.087085237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86b957dc85-lmlvc,Uid:4cfaaab6-6eff-45ed-89ea-a2da98505bb2,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:22.108864 kubelet[2496]: I0813 07:07:22.108565 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-cni-bin-dir\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.108864 kubelet[2496]: I0813 07:07:22.108612 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-policysync\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.108864 kubelet[2496]: I0813 07:07:22.108636 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-flexvol-driver-host\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.108864 kubelet[2496]: I0813 07:07:22.108660 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6b71ad5-6ad4-4858-b72f-58c28eb07042-tigera-ca-bundle\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.108864 kubelet[2496]: I0813 07:07:22.108684 
2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-var-lib-calico\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109291 kubelet[2496]: I0813 07:07:22.108700 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8fx2\" (UniqueName: \"kubernetes.io/projected/e6b71ad5-6ad4-4858-b72f-58c28eb07042-kube-api-access-p8fx2\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109291 kubelet[2496]: I0813 07:07:22.108718 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-xtables-lock\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109291 kubelet[2496]: I0813 07:07:22.108736 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-var-run-calico\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109291 kubelet[2496]: I0813 07:07:22.108755 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-cni-log-dir\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109291 kubelet[2496]: I0813 07:07:22.108769 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-cni-net-dir\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109485 kubelet[2496]: I0813 07:07:22.108789 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6b71ad5-6ad4-4858-b72f-58c28eb07042-lib-modules\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.109485 kubelet[2496]: I0813 07:07:22.108807 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e6b71ad5-6ad4-4858-b72f-58c28eb07042-node-certs\") pod \"calico-node-tqzmx\" (UID: \"e6b71ad5-6ad4-4858-b72f-58c28eb07042\") " pod="calico-system/calico-node-tqzmx" Aug 13 07:07:22.140996 containerd[1472]: time="2025-08-13T07:07:22.139727050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:22.140996 containerd[1472]: time="2025-08-13T07:07:22.139827676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:22.140996 containerd[1472]: time="2025-08-13T07:07:22.139882674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:22.140996 containerd[1472]: time="2025-08-13T07:07:22.140081441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:22.192851 systemd[1]: Started cri-containerd-d009b9573d7c7edecb0b48837934b3fcb0fe8404252d18b4357f3692cf8456e4.scope - libcontainer container d009b9573d7c7edecb0b48837934b3fcb0fe8404252d18b4357f3692cf8456e4. Aug 13 07:07:22.221574 kubelet[2496]: E0813 07:07:22.221495 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.222138 kubelet[2496]: W0813 07:07:22.221890 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.222402 kubelet[2496]: E0813 07:07:22.222383 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.223334 kubelet[2496]: W0813 07:07:22.222405 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.223334 kubelet[2496]: E0813 07:07:22.222945 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.223540 kubelet[2496]: E0813 07:07:22.223472 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.223540 kubelet[2496]: W0813 07:07:22.223489 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.223643 kubelet[2496]: E0813 07:07:22.223540 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.224615 kubelet[2496]: E0813 07:07:22.223700 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.224615 kubelet[2496]: E0813 07:07:22.224133 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.224615 kubelet[2496]: W0813 07:07:22.224145 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.224615 kubelet[2496]: E0813 07:07:22.224161 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.225072 kubelet[2496]: E0813 07:07:22.224686 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.225072 kubelet[2496]: W0813 07:07:22.224701 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.225072 kubelet[2496]: E0813 07:07:22.224760 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.225331 kubelet[2496]: E0813 07:07:22.225094 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.225331 kubelet[2496]: W0813 07:07:22.225107 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.225331 kubelet[2496]: E0813 07:07:22.225160 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.225585 kubelet[2496]: E0813 07:07:22.225567 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.225585 kubelet[2496]: W0813 07:07:22.225580 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.225755 kubelet[2496]: E0813 07:07:22.225634 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.226232 kubelet[2496]: E0813 07:07:22.226200 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.226232 kubelet[2496]: W0813 07:07:22.226214 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.226232 kubelet[2496]: E0813 07:07:22.226231 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.226780 kubelet[2496]: E0813 07:07:22.226454 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.226780 kubelet[2496]: W0813 07:07:22.226778 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.227433 kubelet[2496]: E0813 07:07:22.226807 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.227433 kubelet[2496]: E0813 07:07:22.227054 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.227433 kubelet[2496]: W0813 07:07:22.227063 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.227433 kubelet[2496]: E0813 07:07:22.227074 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.228490 kubelet[2496]: E0813 07:07:22.228467 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.228771 kubelet[2496]: W0813 07:07:22.228746 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.229096 kubelet[2496]: E0813 07:07:22.228844 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.229475 kubelet[2496]: E0813 07:07:22.229458 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.229617 kubelet[2496]: W0813 07:07:22.229599 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.229742 kubelet[2496]: E0813 07:07:22.229716 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.231481 kubelet[2496]: E0813 07:07:22.231390 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.231481 kubelet[2496]: W0813 07:07:22.231413 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.231481 kubelet[2496]: E0813 07:07:22.231434 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.249407 kubelet[2496]: E0813 07:07:22.249348 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.249841 kubelet[2496]: W0813 07:07:22.249378 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.249841 kubelet[2496]: E0813 07:07:22.249536 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.323565 kubelet[2496]: E0813 07:07:22.321622 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270" Aug 13 07:07:22.336558 containerd[1472]: time="2025-08-13T07:07:22.335353014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86b957dc85-lmlvc,Uid:4cfaaab6-6eff-45ed-89ea-a2da98505bb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"d009b9573d7c7edecb0b48837934b3fcb0fe8404252d18b4357f3692cf8456e4\"" Aug 13 07:07:22.345226 kubelet[2496]: E0813 07:07:22.345186 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:22.346892 containerd[1472]: time="2025-08-13T07:07:22.346830634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:07:22.377218 containerd[1472]: time="2025-08-13T07:07:22.376772769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tqzmx,Uid:e6b71ad5-6ad4-4858-b72f-58c28eb07042,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:22.395569 kubelet[2496]: E0813 07:07:22.394725 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.395569 kubelet[2496]: W0813 07:07:22.394751 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.395569 kubelet[2496]: E0813 07:07:22.394787 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.395569 kubelet[2496]: E0813 07:07:22.395068 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.395569 kubelet[2496]: W0813 07:07:22.395077 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.395569 kubelet[2496]: E0813 07:07:22.395087 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.395569 kubelet[2496]: E0813 07:07:22.395289 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.395569 kubelet[2496]: W0813 07:07:22.395297 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.395569 kubelet[2496]: E0813 07:07:22.395306 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.396354 kubelet[2496]: E0813 07:07:22.395815 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.396354 kubelet[2496]: W0813 07:07:22.395837 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.396354 kubelet[2496]: E0813 07:07:22.395852 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.396354 kubelet[2496]: E0813 07:07:22.396255 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.396354 kubelet[2496]: W0813 07:07:22.396265 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.396354 kubelet[2496]: E0813 07:07:22.396278 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.397070 kubelet[2496]: E0813 07:07:22.396633 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.397070 kubelet[2496]: W0813 07:07:22.396643 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.397070 kubelet[2496]: E0813 07:07:22.396691 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.397070 kubelet[2496]: E0813 07:07:22.397084 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.397070 kubelet[2496]: W0813 07:07:22.397094 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.397070 kubelet[2496]: E0813 07:07:22.397105 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.397070 kubelet[2496]: E0813 07:07:22.397388 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.397070 kubelet[2496]: W0813 07:07:22.397397 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.397070 kubelet[2496]: E0813 07:07:22.397406 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.397831 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.400984 kubelet[2496]: W0813 07:07:22.397840 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.397851 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.398106 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.400984 kubelet[2496]: W0813 07:07:22.398134 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.398144 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.398566 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.400984 kubelet[2496]: W0813 07:07:22.398576 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.398589 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.400984 kubelet[2496]: E0813 07:07:22.398875 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.401375 kubelet[2496]: W0813 07:07:22.398885 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.401375 kubelet[2496]: E0813 07:07:22.398894 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.401375 kubelet[2496]: E0813 07:07:22.399354 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.401375 kubelet[2496]: W0813 07:07:22.399364 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.401375 kubelet[2496]: E0813 07:07:22.399377 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.401375 kubelet[2496]: E0813 07:07:22.399762 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.401375 kubelet[2496]: W0813 07:07:22.399772 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.401375 kubelet[2496]: E0813 07:07:22.399783 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.401375 kubelet[2496]: E0813 07:07:22.400020 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.401375 kubelet[2496]: W0813 07:07:22.400029 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.400041 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.400422 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.403652 kubelet[2496]: W0813 07:07:22.400432 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.400442 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.400886 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.403652 kubelet[2496]: W0813 07:07:22.400896 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.400907 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.401151 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.403652 kubelet[2496]: W0813 07:07:22.401161 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.403652 kubelet[2496]: E0813 07:07:22.401170 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:07:22.413481 containerd[1472]: time="2025-08-13T07:07:22.412860585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:07:22.413481 containerd[1472]: time="2025-08-13T07:07:22.413084401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:07:22.413481 containerd[1472]: time="2025-08-13T07:07:22.413111120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:07:22.413481 containerd[1472]: time="2025-08-13T07:07:22.413222761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:07:22.413775 kubelet[2496]: I0813 07:07:22.413226 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/39ea9a35-bd67-4511-9bad-6e60fa944270-varrun\") pod \"csi-node-driver-k4bpr\" (UID: \"39ea9a35-bd67-4511-9bad-6e60fa944270\") " pod="calico-system/csi-node-driver-k4bpr"
Aug 13 07:07:22.415628 kubelet[2496]: I0813 07:07:22.415500 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39ea9a35-bd67-4511-9bad-6e60fa944270-socket-dir\") pod \"csi-node-driver-k4bpr\" (UID: \"39ea9a35-bd67-4511-9bad-6e60fa944270\") " pod="calico-system/csi-node-driver-k4bpr"
Aug 13 07:07:22.416554 kubelet[2496]: I0813 07:07:22.416290 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p64m\" (UniqueName: \"kubernetes.io/projected/39ea9a35-bd67-4511-9bad-6e60fa944270-kube-api-access-2p64m\") pod \"csi-node-driver-k4bpr\" (UID: \"39ea9a35-bd67-4511-9bad-6e60fa944270\") " pod="calico-system/csi-node-driver-k4bpr"
Aug 13 07:07:22.419365 kubelet[2496]: I0813 07:07:22.419087 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39ea9a35-bd67-4511-9bad-6e60fa944270-kubelet-dir\") pod \"csi-node-driver-k4bpr\" (UID: \"39ea9a35-bd67-4511-9bad-6e60fa944270\") " pod="calico-system/csi-node-driver-k4bpr"
Aug 13 07:07:22.424694 kubelet[2496]: I0813 07:07:22.424613 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39ea9a35-bd67-4511-9bad-6e60fa944270-registration-dir\") pod \"csi-node-driver-k4bpr\" (UID: \"39ea9a35-bd67-4511-9bad-6e60fa944270\") " pod="calico-system/csi-node-driver-k4bpr"
Aug 13 07:07:22.453725 systemd[1]: Started cri-containerd-0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180.scope - libcontainer container 0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180.
Aug 13 07:07:22.544122 containerd[1472]: time="2025-08-13T07:07:22.543709129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tqzmx,Uid:e6b71ad5-6ad4-4858-b72f-58c28eb07042,Namespace:calico-system,Attempt:0,} returns sandbox id \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\""
Aug 13 07:07:23.661089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4210065390.mount: Deactivated successfully.
Aug 13 07:07:24.068070 kubelet[2496]: E0813 07:07:24.067221 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270"
Aug 13 07:07:24.507653 containerd[1472]: time="2025-08-13T07:07:24.507367465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:24.508855 containerd[1472]: time="2025-08-13T07:07:24.508793984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:07:24.509592 containerd[1472]: time="2025-08-13T07:07:24.509500504Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:24.511556 containerd[1472]: time="2025-08-13T07:07:24.511457336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:24.512202 containerd[1472]: time="2025-08-13T07:07:24.512169921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.165276909s"
Aug 13 07:07:24.512300 containerd[1472]: time="2025-08-13T07:07:24.512205653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:07:24.521543 containerd[1472]: time="2025-08-13T07:07:24.521262469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:07:24.560378 containerd[1472]: time="2025-08-13T07:07:24.560328261Z" level=info msg="CreateContainer within sandbox \"d009b9573d7c7edecb0b48837934b3fcb0fe8404252d18b4357f3692cf8456e4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:07:24.592931 containerd[1472]: time="2025-08-13T07:07:24.591626333Z" level=info msg="CreateContainer within sandbox \"d009b9573d7c7edecb0b48837934b3fcb0fe8404252d18b4357f3692cf8456e4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"249287b1ebc5f8f7139d5cda0e3a945762d59209566f4a9aba98a0ff13494dce\""
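For scale, the typha pull above reports both the bytes read and the wall time, which gives the effective pull throughput. A one-liner to reproduce the arithmetic, with the numbers copied from the two containerd entries (the ≈15.5 MiB/s result is just that division, not anything containerd reports):

package main

import "fmt"

func main() {
	// Values copied from the "stop pulling image ...typha" and
	// "Pulled image ... in 2.165276909s" entries above.
	const bytesRead = 35233364
	const seconds = 2.165276909
	fmt.Printf("effective pull rate: %.1f MiB/s\n", bytesRead/seconds/(1024*1024))
}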
\"249287b1ebc5f8f7139d5cda0e3a945762d59209566f4a9aba98a0ff13494dce\"" Aug 13 07:07:24.592399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246072016.mount: Deactivated successfully. Aug 13 07:07:24.599012 containerd[1472]: time="2025-08-13T07:07:24.598080289Z" level=info msg="StartContainer for \"249287b1ebc5f8f7139d5cda0e3a945762d59209566f4a9aba98a0ff13494dce\"" Aug 13 07:07:24.660778 systemd[1]: Started cri-containerd-249287b1ebc5f8f7139d5cda0e3a945762d59209566f4a9aba98a0ff13494dce.scope - libcontainer container 249287b1ebc5f8f7139d5cda0e3a945762d59209566f4a9aba98a0ff13494dce. Aug 13 07:07:24.722069 containerd[1472]: time="2025-08-13T07:07:24.722012038Z" level=info msg="StartContainer for \"249287b1ebc5f8f7139d5cda0e3a945762d59209566f4a9aba98a0ff13494dce\" returns successfully" Aug 13 07:07:25.214987 kubelet[2496]: E0813 07:07:25.214928 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:25.232073 kubelet[2496]: E0813 07:07:25.231912 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:25.232073 kubelet[2496]: W0813 07:07:25.231943 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:25.232073 kubelet[2496]: E0813 07:07:25.231969 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:25.232707 kubelet[2496]: E0813 07:07:25.232529 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:25.232707 kubelet[2496]: W0813 07:07:25.232572 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:25.232707 kubelet[2496]: E0813 07:07:25.232598 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:25.233241 kubelet[2496]: E0813 07:07:25.233064 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:25.233241 kubelet[2496]: W0813 07:07:25.233079 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:25.233241 kubelet[2496]: E0813 07:07:25.233093 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:07:25.255407 kubelet[2496]: I0813 07:07:25.255340 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86b957dc85-lmlvc" podStartSLOduration=2.0807807289999998 podStartE2EDuration="4.255321491s" podCreationTimestamp="2025-08-13 07:07:21 +0000 UTC" firstStartedPulling="2025-08-13 07:07:22.346121434 +0000 UTC m=+21.455150322" lastFinishedPulling="2025-08-13 07:07:24.520662191 +0000 UTC m=+23.629691084" observedRunningTime="2025-08-13 07:07:25.255191659 +0000 UTC m=+24.364220557" watchObservedRunningTime="2025-08-13 07:07:25.255321491 +0000 UTC m=+24.364350387"
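The startup-latency record above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (≈4.255s), and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, ≈2.175s) yields exactly the reported podStartSLOduration of ≈2.081s. A sketch of that arithmetic; the field names and timestamps are taken from the log line (monotonic "m=+" suffixes dropped), while the decomposition itself is an assumption about how the tracker derives the SLO figure:

package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker line above.
	created := mustParse("2025-08-13 07:07:21 +0000 UTC")
	firstPull := mustParse("2025-08-13 07:07:22.346121434 +0000 UTC")
	lastPull := mustParse("2025-08-13 07:07:24.520662191 +0000 UTC")
	running := mustParse("2025-08-13 07:07:25.255321491 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // e2e minus the image-pull window

	fmt.Println("podStartE2EDuration:", e2e) // 4.255321491s
	fmt.Println("podStartSLOduration:", slo) // 2.080780729s
}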
Error: unexpected end of JSON input" Aug 13 07:07:25.280232 kubelet[2496]: E0813 07:07:25.280208 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:25.280303 kubelet[2496]: W0813 07:07:25.280263 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:25.280303 kubelet[2496]: E0813 07:07:25.280284 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:25.918036 containerd[1472]: time="2025-08-13T07:07:25.917947020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:25.919559 containerd[1472]: time="2025-08-13T07:07:25.919172973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:07:25.920389 containerd[1472]: time="2025-08-13T07:07:25.920349952Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:25.922857 containerd[1472]: time="2025-08-13T07:07:25.922815494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:25.923582 containerd[1472]: time="2025-08-13T07:07:25.923549262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.402238902s" Aug 13 07:07:25.923694 containerd[1472]: time="2025-08-13T07:07:25.923679508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:07:25.926657 containerd[1472]: time="2025-08-13T07:07:25.926623369Z" level=info msg="CreateContainer within sandbox \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:07:25.944700 containerd[1472]: time="2025-08-13T07:07:25.944646286Z" level=info msg="CreateContainer within sandbox \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b\"" Aug 13 07:07:25.946669 containerd[1472]: time="2025-08-13T07:07:25.945827681Z" level=info msg="StartContainer for \"f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b\"" Aug 13 07:07:25.994876 systemd[1]: Started cri-containerd-f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b.scope - libcontainer container f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b. 
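
The driver-call.go/plugins.go burst above is the kubelet's dynamic FlexVolume prober at work: it executes every binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument init and unmarshals whatever lands on stdout as JSON. The nodeagent~uds/uds binary does not exist yet, so stdout is empty and the unmarshal fails with "unexpected end of JSON input"; the flexvol-driver container created just above is what installs it. A minimal sketch of the reply shape such a driver is expected to print (illustrative Go, not Calico's actual uds implementation):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON the kubelet unmarshals after each
    // FlexVolume driver call; Status is "Success", "Failure" or "Not supported".
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        reply := driverStatus{Status: "Not supported"}
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Printing nothing here is exactly what yields the
            // "unexpected end of JSON input" errors in the log.
            reply = driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            }
        }
        out, err := json.Marshal(reply)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(string(out))
    }

Read with that contract in mind, each W/E pair above is one probe attempt: the exec fails (executable file not found in $PATH), stdout stays empty, and the JSON unmarshal error follows.
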
Aug 13 07:07:26.035814 containerd[1472]: time="2025-08-13T07:07:26.035758643Z" level=info msg="StartContainer for \"f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b\" returns successfully" Aug 13 07:07:26.056726 systemd[1]: cri-containerd-f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b.scope: Deactivated successfully. Aug 13 07:07:26.070570 kubelet[2496]: E0813 07:07:26.066757 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270" Aug 13 07:07:26.099083 containerd[1472]: time="2025-08-13T07:07:26.098988157Z" level=info msg="shim disconnected" id=f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b namespace=k8s.io Aug 13 07:07:26.100403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b-rootfs.mount: Deactivated successfully. Aug 13 07:07:26.100968 containerd[1472]: time="2025-08-13T07:07:26.100460359Z" level=warning msg="cleaning up after shim disconnected" id=f18dfb88bcbd26315097d66d1149e163b368cc590130ad30dfa8e687079cf14b namespace=k8s.io Aug 13 07:07:26.100968 containerd[1472]: time="2025-08-13T07:07:26.100519656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:26.218245 kubelet[2496]: I0813 07:07:26.218002 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:07:26.218894 kubelet[2496]: E0813 07:07:26.218339 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:26.222208 containerd[1472]: time="2025-08-13T07:07:26.221712930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:07:28.067500 kubelet[2496]: E0813 07:07:28.067292 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270" Aug 13 07:07:30.067389 kubelet[2496]: E0813 07:07:30.067138 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270" Aug 13 07:07:31.032620 containerd[1472]: time="2025-08-13T07:07:31.032142884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:31.034073 containerd[1472]: time="2025-08-13T07:07:31.033941441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:07:31.035356 containerd[1472]: time="2025-08-13T07:07:31.035231555Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:31.038780 containerd[1472]: time="2025-08-13T07:07:31.038675867Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:31.041553 containerd[1472]: time="2025-08-13T07:07:31.040241712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.818459147s" Aug 13 07:07:31.041963 containerd[1472]: time="2025-08-13T07:07:31.041820859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:07:31.050772 containerd[1472]: time="2025-08-13T07:07:31.049888713Z" level=info msg="CreateContainer within sandbox \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:07:31.072749 containerd[1472]: time="2025-08-13T07:07:31.072689759Z" level=info msg="CreateContainer within sandbox \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7\"" Aug 13 07:07:31.079451 containerd[1472]: time="2025-08-13T07:07:31.079366214Z" level=info msg="StartContainer for \"5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7\"" Aug 13 07:07:31.173020 systemd[1]: Started cri-containerd-5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7.scope - libcontainer container 5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7. Aug 13 07:07:31.216341 containerd[1472]: time="2025-08-13T07:07:31.216212398Z" level=info msg="StartContainer for \"5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7\" returns successfully" Aug 13 07:07:31.969407 systemd[1]: cri-containerd-5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7.scope: Deactivated successfully. Aug 13 07:07:32.031332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7-rootfs.mount: Deactivated successfully. 
Aug 13 07:07:32.035930 containerd[1472]: time="2025-08-13T07:07:32.034175375Z" level=info msg="shim disconnected" id=5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7 namespace=k8s.io Aug 13 07:07:32.035930 containerd[1472]: time="2025-08-13T07:07:32.034258470Z" level=warning msg="cleaning up after shim disconnected" id=5e41f3714db911373a66abd4c9dd9567d48969e7d66534ad3cbd50ce7d6a22f7 namespace=k8s.io Aug 13 07:07:32.035930 containerd[1472]: time="2025-08-13T07:07:32.034270036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:32.052811 kubelet[2496]: I0813 07:07:32.052780 2496 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:07:32.062668 containerd[1472]: time="2025-08-13T07:07:32.062539970Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:07:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:07:32.083727 systemd[1]: Created slice kubepods-besteffort-pod39ea9a35_bd67_4511_9bad_6e60fa944270.slice - libcontainer container kubepods-besteffort-pod39ea9a35_bd67_4511_9bad_6e60fa944270.slice. Aug 13 07:07:32.101867 containerd[1472]: time="2025-08-13T07:07:32.101804784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4bpr,Uid:39ea9a35-bd67-4511-9bad-6e60fa944270,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:32.145820 systemd[1]: Created slice kubepods-besteffort-pod8ea21fb6_01dc_4e57_a88f_d08fd9931ec8.slice - libcontainer container kubepods-besteffort-pod8ea21fb6_01dc_4e57_a88f_d08fd9931ec8.slice. Aug 13 07:07:32.184544 kubelet[2496]: W0813 07:07:32.183815 2496 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.5-4-06119f59db" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-4-06119f59db' and this object Aug 13 07:07:32.193049 systemd[1]: Created slice kubepods-besteffort-pod2150c293_bab3_40c4_95af_02ef0839fd92.slice - libcontainer container kubepods-besteffort-pod2150c293_bab3_40c4_95af_02ef0839fd92.slice. Aug 13 07:07:32.195927 kubelet[2496]: E0813 07:07:32.193383 2496 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.5-4-06119f59db\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-4-06119f59db' and this object" logger="UnhandledError" Aug 13 07:07:32.221128 systemd[1]: Created slice kubepods-burstable-pod553898de_3f11_4cc6_b330_03fffa4336bb.slice - libcontainer container kubepods-burstable-pod553898de_3f11_4cc6_b330_03fffa4336bb.slice. 
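
The kubepods-*.slice units being created here follow a mechanical naming scheme under the systemd cgroup driver: the pod's QoS class plus its UID with dashes escaped to underscores (guaranteed pods would sit directly under kubepods by the same convention). A small sketch of the mapping as it appears in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the slice naming visible in the log for
    // besteffort and burstable pods.
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "39ea9a35-bd67-4511-9bad-6e60fa944270"))
        // kubepods-besteffort-pod39ea9a35_bd67_4511_9bad_6e60fa944270.slice
    }

That UID, 39ea9a35-bd67-4511-9bad-6e60fa944270, is csi-node-driver-k4bpr, the pod whose sandbox creation fails repeatedly in this log.
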
Aug 13 07:07:32.228399 kubelet[2496]: I0813 07:07:32.228120 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m89g\" (UniqueName: \"kubernetes.io/projected/1e92778b-e18e-498e-93e6-ad7ba8ca5d17-kube-api-access-9m89g\") pod \"coredns-668d6bf9bc-p7m85\" (UID: \"1e92778b-e18e-498e-93e6-ad7ba8ca5d17\") " pod="kube-system/coredns-668d6bf9bc-p7m85" Aug 13 07:07:32.231743 kubelet[2496]: I0813 07:07:32.231650 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xjmp\" (UniqueName: \"kubernetes.io/projected/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-kube-api-access-6xjmp\") pod \"whisker-75cb67d974-m5zrf\" (UID: \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\") " pod="calico-system/whisker-75cb67d974-m5zrf" Aug 13 07:07:32.232209 kubelet[2496]: I0813 07:07:32.232165 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2150c293-bab3-40c4-95af-02ef0839fd92-tigera-ca-bundle\") pod \"calico-kube-controllers-6fb8c445f9-bjpjh\" (UID: \"2150c293-bab3-40c4-95af-02ef0839fd92\") " pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" Aug 13 07:07:32.235021 kubelet[2496]: I0813 07:07:32.233892 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b305560b-20ae-4df3-8a79-002d16b6c79f-calico-apiserver-certs\") pod \"calico-apiserver-785ff45d87-mqjrr\" (UID: \"b305560b-20ae-4df3-8a79-002d16b6c79f\") " pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" Aug 13 07:07:32.237604 kubelet[2496]: I0813 07:07:32.235930 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8831262f-9738-4f38-9a9d-147ae2cd3257-goldmane-key-pair\") pod \"goldmane-768f4c5c69-j2p8w\" (UID: \"8831262f-9738-4f38-9a9d-147ae2cd3257\") " pod="calico-system/goldmane-768f4c5c69-j2p8w" Aug 13 07:07:32.237604 kubelet[2496]: I0813 07:07:32.235979 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5v5\" (UniqueName: \"kubernetes.io/projected/8831262f-9738-4f38-9a9d-147ae2cd3257-kube-api-access-qz5v5\") pod \"goldmane-768f4c5c69-j2p8w\" (UID: \"8831262f-9738-4f38-9a9d-147ae2cd3257\") " pod="calico-system/goldmane-768f4c5c69-j2p8w" Aug 13 07:07:32.237604 kubelet[2496]: I0813 07:07:32.236027 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp2wb\" (UniqueName: \"kubernetes.io/projected/b305560b-20ae-4df3-8a79-002d16b6c79f-kube-api-access-xp2wb\") pod \"calico-apiserver-785ff45d87-mqjrr\" (UID: \"b305560b-20ae-4df3-8a79-002d16b6c79f\") " pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" Aug 13 07:07:32.237604 kubelet[2496]: I0813 07:07:32.236053 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbb7\" (UniqueName: \"kubernetes.io/projected/2150c293-bab3-40c4-95af-02ef0839fd92-kube-api-access-7lbb7\") pod \"calico-kube-controllers-6fb8c445f9-bjpjh\" (UID: \"2150c293-bab3-40c4-95af-02ef0839fd92\") " pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" Aug 13 07:07:32.237604 kubelet[2496]: I0813 07:07:32.236082 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8831262f-9738-4f38-9a9d-147ae2cd3257-config\") pod \"goldmane-768f4c5c69-j2p8w\" (UID: \"8831262f-9738-4f38-9a9d-147ae2cd3257\") " pod="calico-system/goldmane-768f4c5c69-j2p8w" Aug 13 07:07:32.238798 kubelet[2496]: I0813 07:07:32.236107 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-backend-key-pair\") pod \"whisker-75cb67d974-m5zrf\" (UID: \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\") " pod="calico-system/whisker-75cb67d974-m5zrf" Aug 13 07:07:32.238798 kubelet[2496]: I0813 07:07:32.236133 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-ca-bundle\") pod \"whisker-75cb67d974-m5zrf\" (UID: \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\") " pod="calico-system/whisker-75cb67d974-m5zrf" Aug 13 07:07:32.238798 kubelet[2496]: I0813 07:07:32.236172 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0227ea35-1919-478f-af60-278b21232cf4-calico-apiserver-certs\") pod \"calico-apiserver-785ff45d87-mccbf\" (UID: \"0227ea35-1919-478f-af60-278b21232cf4\") " pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" Aug 13 07:07:32.238798 kubelet[2496]: I0813 07:07:32.236197 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blk82\" (UniqueName: \"kubernetes.io/projected/0227ea35-1919-478f-af60-278b21232cf4-kube-api-access-blk82\") pod \"calico-apiserver-785ff45d87-mccbf\" (UID: \"0227ea35-1919-478f-af60-278b21232cf4\") " pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" Aug 13 07:07:32.238798 kubelet[2496]: I0813 07:07:32.236221 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8831262f-9738-4f38-9a9d-147ae2cd3257-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-j2p8w\" (UID: \"8831262f-9738-4f38-9a9d-147ae2cd3257\") " pod="calico-system/goldmane-768f4c5c69-j2p8w" Aug 13 07:07:32.238960 kubelet[2496]: I0813 07:07:32.236253 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmg6\" (UniqueName: \"kubernetes.io/projected/553898de-3f11-4cc6-b330-03fffa4336bb-kube-api-access-lrmg6\") pod \"coredns-668d6bf9bc-tf4rp\" (UID: \"553898de-3f11-4cc6-b330-03fffa4336bb\") " pod="kube-system/coredns-668d6bf9bc-tf4rp" Aug 13 07:07:32.238960 kubelet[2496]: I0813 07:07:32.236285 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/553898de-3f11-4cc6-b330-03fffa4336bb-config-volume\") pod \"coredns-668d6bf9bc-tf4rp\" (UID: \"553898de-3f11-4cc6-b330-03fffa4336bb\") " pod="kube-system/coredns-668d6bf9bc-tf4rp" Aug 13 07:07:32.238960 kubelet[2496]: I0813 07:07:32.236321 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e92778b-e18e-498e-93e6-ad7ba8ca5d17-config-volume\") pod \"coredns-668d6bf9bc-p7m85\" (UID: \"1e92778b-e18e-498e-93e6-ad7ba8ca5d17\") " pod="kube-system/coredns-668d6bf9bc-p7m85" 
Aug 13 07:07:32.249916 systemd[1]: Created slice kubepods-besteffort-podb305560b_20ae_4df3_8a79_002d16b6c79f.slice - libcontainer container kubepods-besteffort-podb305560b_20ae_4df3_8a79_002d16b6c79f.slice. Aug 13 07:07:32.272604 containerd[1472]: time="2025-08-13T07:07:32.271925417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:07:32.274142 systemd[1]: Created slice kubepods-burstable-pod1e92778b_e18e_498e_93e6_ad7ba8ca5d17.slice - libcontainer container kubepods-burstable-pod1e92778b_e18e_498e_93e6_ad7ba8ca5d17.slice. Aug 13 07:07:32.292218 systemd[1]: Created slice kubepods-besteffort-pod0227ea35_1919_478f_af60_278b21232cf4.slice - libcontainer container kubepods-besteffort-pod0227ea35_1919_478f_af60_278b21232cf4.slice. Aug 13 07:07:32.311474 systemd[1]: Created slice kubepods-besteffort-pod8831262f_9738_4f38_9a9d_147ae2cd3257.slice - libcontainer container kubepods-besteffort-pod8831262f_9738_4f38_9a9d_147ae2cd3257.slice. Aug 13 07:07:32.459585 containerd[1472]: time="2025-08-13T07:07:32.459415713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75cb67d974-m5zrf,Uid:8ea21fb6-01dc-4e57-a88f-d08fd9931ec8,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:32.513455 containerd[1472]: time="2025-08-13T07:07:32.511986547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb8c445f9-bjpjh,Uid:2150c293-bab3-40c4-95af-02ef0839fd92,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:32.543122 kubelet[2496]: E0813 07:07:32.542695 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:32.557014 containerd[1472]: time="2025-08-13T07:07:32.556016121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tf4rp,Uid:553898de-3f11-4cc6-b330-03fffa4336bb,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:32.587135 kubelet[2496]: E0813 07:07:32.586543 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:32.589964 containerd[1472]: time="2025-08-13T07:07:32.589666080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p7m85,Uid:1e92778b-e18e-498e-93e6-ad7ba8ca5d17,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:32.624567 containerd[1472]: time="2025-08-13T07:07:32.624106007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j2p8w,Uid:8831262f-9738-4f38-9a9d-147ae2cd3257,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:32.955543 containerd[1472]: time="2025-08-13T07:07:32.954445113Z" level=error msg="Failed to destroy network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.966596 containerd[1472]: time="2025-08-13T07:07:32.964759180Z" level=error msg="encountered an error cleaning up failed sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.966596 
containerd[1472]: time="2025-08-13T07:07:32.964882785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb8c445f9-bjpjh,Uid:2150c293-bab3-40c4-95af-02ef0839fd92,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.967845 kubelet[2496]: E0813 07:07:32.965566 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.967845 kubelet[2496]: E0813 07:07:32.965654 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" Aug 13 07:07:32.967845 kubelet[2496]: E0813 07:07:32.965689 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" Aug 13 07:07:32.968024 kubelet[2496]: E0813 07:07:32.965747 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb8c445f9-bjpjh_calico-system(2150c293-bab3-40c4-95af-02ef0839fd92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb8c445f9-bjpjh_calico-system(2150c293-bab3-40c4-95af-02ef0839fd92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" podUID="2150c293-bab3-40c4-95af-02ef0839fd92" Aug 13 07:07:32.975891 containerd[1472]: time="2025-08-13T07:07:32.975826967Z" level=error msg="Failed to destroy network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.976820 containerd[1472]: time="2025-08-13T07:07:32.976766912Z" level=error msg="encountered an error cleaning up failed sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.978349 containerd[1472]: time="2025-08-13T07:07:32.978289110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4bpr,Uid:39ea9a35-bd67-4511-9bad-6e60fa944270,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.979754 kubelet[2496]: E0813 07:07:32.979138 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.979754 kubelet[2496]: E0813 07:07:32.979247 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4bpr" Aug 13 07:07:32.979754 kubelet[2496]: E0813 07:07:32.979299 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4bpr" Aug 13 07:07:32.980075 kubelet[2496]: E0813 07:07:32.979367 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k4bpr_calico-system(39ea9a35-bd67-4511-9bad-6e60fa944270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k4bpr_calico-system(39ea9a35-bd67-4511-9bad-6e60fa944270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270" Aug 13 07:07:32.980746 containerd[1472]: time="2025-08-13T07:07:32.980639239Z" level=error msg="Failed to destroy network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.983725 containerd[1472]: time="2025-08-13T07:07:32.983664600Z" level=error msg="encountered an error cleaning up failed sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.984149 containerd[1472]: time="2025-08-13T07:07:32.984059409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75cb67d974-m5zrf,Uid:8ea21fb6-01dc-4e57-a88f-d08fd9931ec8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.985881 kubelet[2496]: E0813 07:07:32.985817 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.986072 kubelet[2496]: E0813 07:07:32.985913 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75cb67d974-m5zrf" Aug 13 07:07:32.986072 kubelet[2496]: E0813 07:07:32.985948 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75cb67d974-m5zrf" Aug 13 07:07:32.986072 kubelet[2496]: E0813 07:07:32.986013 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75cb67d974-m5zrf_calico-system(8ea21fb6-01dc-4e57-a88f-d08fd9931ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75cb67d974-m5zrf_calico-system(8ea21fb6-01dc-4e57-a88f-d08fd9931ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75cb67d974-m5zrf" podUID="8ea21fb6-01dc-4e57-a88f-d08fd9931ec8" Aug 13 07:07:32.994665 containerd[1472]: time="2025-08-13T07:07:32.994541116Z" level=error msg="Failed to destroy network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.996290 containerd[1472]: time="2025-08-13T07:07:32.996116153Z" level=error msg="encountered an error cleaning up failed sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.997007 containerd[1472]: time="2025-08-13T07:07:32.996730279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tf4rp,Uid:553898de-3f11-4cc6-b330-03fffa4336bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.997560 kubelet[2496]: E0813 07:07:32.997426 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.997709 kubelet[2496]: E0813 07:07:32.997677 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tf4rp" Aug 13 07:07:32.997978 kubelet[2496]: E0813 07:07:32.997825 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tf4rp" Aug 13 07:07:32.997978 kubelet[2496]: E0813 07:07:32.997919 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tf4rp_kube-system(553898de-3f11-4cc6-b330-03fffa4336bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tf4rp_kube-system(553898de-3f11-4cc6-b330-03fffa4336bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tf4rp" podUID="553898de-3f11-4cc6-b330-03fffa4336bb" Aug 13 07:07:33.011393 containerd[1472]: time="2025-08-13T07:07:33.011203960Z" level=error msg="Failed to destroy network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.012091 containerd[1472]: time="2025-08-13T07:07:33.011982005Z" level=error msg="encountered an error cleaning up failed sandbox 
\"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.012283 containerd[1472]: time="2025-08-13T07:07:33.012213393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j2p8w,Uid:8831262f-9738-4f38-9a9d-147ae2cd3257,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.013863 kubelet[2496]: E0813 07:07:33.013768 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.014115 kubelet[2496]: E0813 07:07:33.013888 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-j2p8w" Aug 13 07:07:33.014115 kubelet[2496]: E0813 07:07:33.013916 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-j2p8w" Aug 13 07:07:33.014115 kubelet[2496]: E0813 07:07:33.013980 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-j2p8w_calico-system(8831262f-9738-4f38-9a9d-147ae2cd3257)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-j2p8w_calico-system(8831262f-9738-4f38-9a9d-147ae2cd3257)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-j2p8w" podUID="8831262f-9738-4f38-9a9d-147ae2cd3257" Aug 13 07:07:33.028400 containerd[1472]: time="2025-08-13T07:07:33.028333749Z" level=error msg="Failed to destroy network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.028915 containerd[1472]: time="2025-08-13T07:07:33.028821726Z" 
level=error msg="encountered an error cleaning up failed sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.029218 containerd[1472]: time="2025-08-13T07:07:33.028927666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p7m85,Uid:1e92778b-e18e-498e-93e6-ad7ba8ca5d17,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.029721 kubelet[2496]: E0813 07:07:33.029670 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.029830 kubelet[2496]: E0813 07:07:33.029759 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p7m85" Aug 13 07:07:33.029830 kubelet[2496]: E0813 07:07:33.029792 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p7m85" Aug 13 07:07:33.029939 kubelet[2496]: E0813 07:07:33.029869 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p7m85_kube-system(1e92778b-e18e-498e-93e6-ad7ba8ca5d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p7m85_kube-system(1e92778b-e18e-498e-93e6-ad7ba8ca5d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p7m85" podUID="1e92778b-e18e-498e-93e6-ad7ba8ca5d17" Aug 13 07:07:33.164188 containerd[1472]: time="2025-08-13T07:07:33.164113141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mqjrr,Uid:b305560b-20ae-4df3-8a79-002d16b6c79f,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:07:33.198945 containerd[1472]: time="2025-08-13T07:07:33.198545551Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mccbf,Uid:0227ea35-1919-478f-af60-278b21232cf4,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:07:33.203729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a-shm.mount: Deactivated successfully. Aug 13 07:07:33.278535 kubelet[2496]: I0813 07:07:33.277945 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:07:33.295180 kubelet[2496]: I0813 07:07:33.292551 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:07:33.296496 containerd[1472]: time="2025-08-13T07:07:33.296448900Z" level=info msg="StopPodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\"" Aug 13 07:07:33.300437 containerd[1472]: time="2025-08-13T07:07:33.300375329Z" level=info msg="Ensure that sandbox 3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f in task-service has been cleanup successfully" Aug 13 07:07:33.300893 containerd[1472]: time="2025-08-13T07:07:33.300475298Z" level=info msg="StopPodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\"" Aug 13 07:07:33.304645 kubelet[2496]: I0813 07:07:33.304283 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:07:33.307691 containerd[1472]: time="2025-08-13T07:07:33.306843598Z" level=info msg="StopPodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\"" Aug 13 07:07:33.307691 containerd[1472]: time="2025-08-13T07:07:33.307201061Z" level=info msg="Ensure that sandbox 4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a in task-service has been cleanup successfully" Aug 13 07:07:33.309149 containerd[1472]: time="2025-08-13T07:07:33.309092211Z" level=info msg="Ensure that sandbox 3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac in task-service has been cleanup successfully" Aug 13 07:07:33.318174 kubelet[2496]: I0813 07:07:33.318135 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:07:33.320117 containerd[1472]: time="2025-08-13T07:07:33.319989119Z" level=info msg="StopPodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\"" Aug 13 07:07:33.320286 containerd[1472]: time="2025-08-13T07:07:33.320198651Z" level=info msg="Ensure that sandbox 7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af in task-service has been cleanup successfully" Aug 13 07:07:33.338079 kubelet[2496]: I0813 07:07:33.337827 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:07:33.340416 containerd[1472]: time="2025-08-13T07:07:33.339337621Z" level=info msg="StopPodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\"" Aug 13 07:07:33.340416 containerd[1472]: time="2025-08-13T07:07:33.339813734Z" level=info msg="Ensure that sandbox 584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32 in task-service has been cleanup successfully" Aug 13 07:07:33.344240 kubelet[2496]: I0813 07:07:33.343667 2496 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:07:33.346073 containerd[1472]: time="2025-08-13T07:07:33.345645619Z" level=info msg="StopPodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\"" Aug 13 07:07:33.351078 containerd[1472]: time="2025-08-13T07:07:33.350862525Z" level=info msg="Ensure that sandbox 6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb in task-service has been cleanup successfully" Aug 13 07:07:33.429635 containerd[1472]: time="2025-08-13T07:07:33.429560944Z" level=error msg="StopPodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" failed" error="failed to destroy network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.430148 kubelet[2496]: E0813 07:07:33.430104 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:07:33.430268 kubelet[2496]: E0813 07:07:33.430171 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac"} Aug 13 07:07:33.430268 kubelet[2496]: E0813 07:07:33.430260 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.430438 kubelet[2496]: E0813 07:07:33.430284 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75cb67d974-m5zrf" podUID="8ea21fb6-01dc-4e57-a88f-d08fd9931ec8" Aug 13 07:07:33.489789 containerd[1472]: time="2025-08-13T07:07:33.489625352Z" level=error msg="Failed to destroy network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.492539 containerd[1472]: time="2025-08-13T07:07:33.490838212Z" level=error msg="Failed to destroy network for sandbox 
\"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.495196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91-shm.mount: Deactivated successfully. Aug 13 07:07:33.499499 containerd[1472]: time="2025-08-13T07:07:33.499317598Z" level=error msg="encountered an error cleaning up failed sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.499499 containerd[1472]: time="2025-08-13T07:07:33.499424455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mqjrr,Uid:b305560b-20ae-4df3-8a79-002d16b6c79f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.501400 kubelet[2496]: E0813 07:07:33.500816 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.501400 kubelet[2496]: E0813 07:07:33.500911 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" Aug 13 07:07:33.501400 kubelet[2496]: E0813 07:07:33.500943 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" Aug 13 07:07:33.501855 kubelet[2496]: E0813 07:07:33.501013 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785ff45d87-mqjrr_calico-apiserver(b305560b-20ae-4df3-8a79-002d16b6c79f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785ff45d87-mqjrr_calico-apiserver(b305560b-20ae-4df3-8a79-002d16b6c79f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" podUID="b305560b-20ae-4df3-8a79-002d16b6c79f" Aug 13 07:07:33.502939 containerd[1472]: time="2025-08-13T07:07:33.502447388Z" level=error msg="encountered an error cleaning up failed sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.504607 containerd[1472]: time="2025-08-13T07:07:33.502899585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mccbf,Uid:0227ea35-1919-478f-af60-278b21232cf4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.510351 kubelet[2496]: E0813 07:07:33.510276 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.510750 kubelet[2496]: E0813 07:07:33.510564 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" Aug 13 07:07:33.510750 kubelet[2496]: E0813 07:07:33.510612 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" Aug 13 07:07:33.511662 kubelet[2496]: E0813 07:07:33.511034 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785ff45d87-mccbf_calico-apiserver(0227ea35-1919-478f-af60-278b21232cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785ff45d87-mccbf_calico-apiserver(0227ea35-1919-478f-af60-278b21232cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" podUID="0227ea35-1919-478f-af60-278b21232cf4" Aug 13 07:07:33.540146 containerd[1472]: 
time="2025-08-13T07:07:33.538857383Z" level=error msg="StopPodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" failed" error="failed to destroy network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.540288 kubelet[2496]: E0813 07:07:33.539613 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:07:33.540288 kubelet[2496]: E0813 07:07:33.539681 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a"} Aug 13 07:07:33.540288 kubelet[2496]: E0813 07:07:33.539738 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39ea9a35-bd67-4511-9bad-6e60fa944270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.540288 kubelet[2496]: E0813 07:07:33.540068 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39ea9a35-bd67-4511-9bad-6e60fa944270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4bpr" podUID="39ea9a35-bd67-4511-9bad-6e60fa944270" Aug 13 07:07:33.575544 containerd[1472]: time="2025-08-13T07:07:33.574904500Z" level=error msg="StopPodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" failed" error="failed to destroy network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.575745 kubelet[2496]: E0813 07:07:33.575264 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:07:33.575745 kubelet[2496]: E0813 07:07:33.575336 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af"} Aug 13 07:07:33.575745 kubelet[2496]: E0813 07:07:33.575390 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"553898de-3f11-4cc6-b330-03fffa4336bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.575745 kubelet[2496]: E0813 07:07:33.575423 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"553898de-3f11-4cc6-b330-03fffa4336bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tf4rp" podUID="553898de-3f11-4cc6-b330-03fffa4336bb" Aug 13 07:07:33.576470 containerd[1472]: time="2025-08-13T07:07:33.576327035Z" level=error msg="StopPodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" failed" error="failed to destroy network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.576767 kubelet[2496]: E0813 07:07:33.576656 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:07:33.576767 kubelet[2496]: E0813 07:07:33.576732 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32"} Aug 13 07:07:33.576921 kubelet[2496]: E0813 07:07:33.576783 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2150c293-bab3-40c4-95af-02ef0839fd92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.576921 kubelet[2496]: E0813 07:07:33.576817 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2150c293-bab3-40c4-95af-02ef0839fd92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" podUID="2150c293-bab3-40c4-95af-02ef0839fd92" Aug 13 07:07:33.583321 containerd[1472]: time="2025-08-13T07:07:33.582924351Z" level=error msg="StopPodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" failed" error="failed to destroy network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.583497 kubelet[2496]: E0813 07:07:33.583202 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:07:33.583497 kubelet[2496]: E0813 07:07:33.583268 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f"} Aug 13 07:07:33.583497 kubelet[2496]: E0813 07:07:33.583306 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8831262f-9738-4f38-9a9d-147ae2cd3257\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.583497 kubelet[2496]: E0813 07:07:33.583332 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8831262f-9738-4f38-9a9d-147ae2cd3257\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-j2p8w" podUID="8831262f-9738-4f38-9a9d-147ae2cd3257" Aug 13 07:07:33.586663 containerd[1472]: time="2025-08-13T07:07:33.586595132Z" level=error msg="StopPodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" failed" error="failed to destroy network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.587643 kubelet[2496]: E0813 07:07:33.587101 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:07:33.587872 kubelet[2496]: E0813 07:07:33.587670 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb"} Aug 13 07:07:33.587872 kubelet[2496]: E0813 07:07:33.587715 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e92778b-e18e-498e-93e6-ad7ba8ca5d17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.587872 kubelet[2496]: E0813 07:07:33.587753 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e92778b-e18e-498e-93e6-ad7ba8ca5d17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p7m85" podUID="1e92778b-e18e-498e-93e6-ad7ba8ca5d17" Aug 13 07:07:34.174263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591-shm.mount: Deactivated successfully. Aug 13 07:07:34.356128 kubelet[2496]: I0813 07:07:34.356073 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:07:34.357840 containerd[1472]: time="2025-08-13T07:07:34.357726246Z" level=info msg="StopPodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\"" Aug 13 07:07:34.359980 containerd[1472]: time="2025-08-13T07:07:34.359880912Z" level=info msg="Ensure that sandbox 94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91 in task-service has been cleanup successfully" Aug 13 07:07:34.361402 kubelet[2496]: I0813 07:07:34.361228 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:07:34.362996 containerd[1472]: time="2025-08-13T07:07:34.362790374Z" level=info msg="StopPodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\"" Aug 13 07:07:34.363819 containerd[1472]: time="2025-08-13T07:07:34.363786821Z" level=info msg="Ensure that sandbox 45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591 in task-service has been cleanup successfully" Aug 13 07:07:34.434977 containerd[1472]: time="2025-08-13T07:07:34.434819406Z" level=error msg="StopPodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" failed" error="failed to destroy network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:34.440913 kubelet[2496]: 
E0813 07:07:34.438670 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:07:34.440913 kubelet[2496]: E0813 07:07:34.440672 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591"} Aug 13 07:07:34.440913 kubelet[2496]: E0813 07:07:34.440779 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0227ea35-1919-478f-af60-278b21232cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:34.440913 kubelet[2496]: E0813 07:07:34.440818 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0227ea35-1919-478f-af60-278b21232cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" podUID="0227ea35-1919-478f-af60-278b21232cf4" Aug 13 07:07:34.459255 containerd[1472]: time="2025-08-13T07:07:34.459093900Z" level=error msg="StopPodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" failed" error="failed to destroy network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:34.459705 kubelet[2496]: E0813 07:07:34.459461 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:07:34.459705 kubelet[2496]: E0813 07:07:34.459558 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91"} Aug 13 07:07:34.459705 kubelet[2496]: E0813 07:07:34.459607 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b305560b-20ae-4df3-8a79-002d16b6c79f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:34.459705 kubelet[2496]: E0813 07:07:34.459649 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b305560b-20ae-4df3-8a79-002d16b6c79f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" podUID="b305560b-20ae-4df3-8a79-002d16b6c79f" Aug 13 07:07:39.673009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173884564.mount: Deactivated successfully. Aug 13 07:07:39.777531 containerd[1472]: time="2025-08-13T07:07:39.738332070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:07:39.786674 containerd[1472]: time="2025-08-13T07:07:39.786415952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:39.790627 containerd[1472]: time="2025-08-13T07:07:39.790350605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.484786517s" Aug 13 07:07:39.790627 containerd[1472]: time="2025-08-13T07:07:39.790447834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:07:39.821015 containerd[1472]: time="2025-08-13T07:07:39.820845834Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:39.866642 containerd[1472]: time="2025-08-13T07:07:39.866430833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:39.909558 containerd[1472]: time="2025-08-13T07:07:39.909319414Z" level=info msg="CreateContainer within sandbox \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:07:40.041649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049424849.mount: Deactivated successfully. 
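Every failed ADD and DEL above trips over the same stat of /var/lib/calico/nodename, a file that calico/node writes only once it is running with /var/lib/calico/ mounted; the pull and CreateContainer entries for ghcr.io/flatcar/calico/node:v3.30.2 are that container finally arriving. A minimal Go sketch of a check with this failure shape (illustrative only, not Calico's source; the path and hint text are copied from the errors above):

```go
package main

import (
	"fmt"
	"os"
)

// Path taken from the log lines above; calico/node writes this file once it
// is up, and the CNI plugin needs it to resolve the node's name.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the failure mode seen above: until the file exists,
// every CNI ADD/DEL for a pod sandbox fails with the same wrapped stat error.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err already reads "stat /var/lib/calico/nodename: no such file or directory"
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println("CNI would fail here:", err)
		return
	}
	fmt.Println("nodename:", name)
}
```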
Aug 13 07:07:40.049715 containerd[1472]: time="2025-08-13T07:07:40.049553351Z" level=info msg="CreateContainer within sandbox \"0982cf0a0514f8fcb91c330e97be204f0fcba97508091d82abb6678a8dfe0180\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b868793ffc5291c674cf5191f4f947e1ae7ca4a76f481f77fda6929339d88862\"" Aug 13 07:07:40.055619 containerd[1472]: time="2025-08-13T07:07:40.054924077Z" level=info msg="StartContainer for \"b868793ffc5291c674cf5191f4f947e1ae7ca4a76f481f77fda6929339d88862\"" Aug 13 07:07:40.237799 systemd[1]: Started cri-containerd-b868793ffc5291c674cf5191f4f947e1ae7ca4a76f481f77fda6929339d88862.scope - libcontainer container b868793ffc5291c674cf5191f4f947e1ae7ca4a76f481f77fda6929339d88862. Aug 13 07:07:40.286545 containerd[1472]: time="2025-08-13T07:07:40.285379477Z" level=info msg="StartContainer for \"b868793ffc5291c674cf5191f4f947e1ae7ca4a76f481f77fda6929339d88862\" returns successfully" Aug 13 07:07:40.490919 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:07:40.492292 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 07:07:40.553318 kubelet[2496]: I0813 07:07:40.547434 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tqzmx" podStartSLOduration=1.2587473980000001 podStartE2EDuration="18.532033003s" podCreationTimestamp="2025-08-13 07:07:22 +0000 UTC" firstStartedPulling="2025-08-13 07:07:22.54808656 +0000 UTC m=+21.657115450" lastFinishedPulling="2025-08-13 07:07:39.821372164 +0000 UTC m=+38.930401055" observedRunningTime="2025-08-13 07:07:40.524398885 +0000 UTC m=+39.633427785" watchObservedRunningTime="2025-08-13 07:07:40.532033003 +0000 UTC m=+39.641061903" Aug 13 07:07:40.821122 containerd[1472]: time="2025-08-13T07:07:40.821073459Z" level=info msg="StopPodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\"" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:40.929 [INFO][3798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:40.931 [INFO][3798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" iface="eth0" netns="/var/run/netns/cni-35ab230e-d946-0e59-7ed3-8aa605ad5892" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:40.932 [INFO][3798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" iface="eth0" netns="/var/run/netns/cni-35ab230e-d946-0e59-7ed3-8aa605ad5892" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:40.934 [INFO][3798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" iface="eth0" netns="/var/run/netns/cni-35ab230e-d946-0e59-7ed3-8aa605ad5892" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:40.934 [INFO][3798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:40.934 [INFO][3798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.214 [INFO][3806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.217 [INFO][3806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.217 [INFO][3806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.235 [WARNING][3806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.235 [INFO][3806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.240 [INFO][3806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:41.262328 containerd[1472]: 2025-08-13 07:07:41.244 [INFO][3798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:07:41.263162 containerd[1472]: time="2025-08-13T07:07:41.263018496Z" level=info msg="TearDown network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" successfully" Aug 13 07:07:41.263162 containerd[1472]: time="2025-08-13T07:07:41.263062177Z" level=info msg="StopPodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" returns successfully" Aug 13 07:07:41.267333 systemd[1]: run-netns-cni\x2d35ab230e\x2dd946\x2d0e59\x2d7ed3\x2d8aa605ad5892.mount: Deactivated successfully. 
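The pod_startup_latency_tracker entry a few lines above for calico-node-tqzmx can be reproduced from the timestamps it prints. A short Go sketch, assuming podStartSLOduration is the end-to-end duration minus the image-pull window (the logged numbers are consistent with that; kubelet reports the SLO figure as float64 seconds, hence its long decimal tail):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the "Observed pod startup duration" entry above.
	created := parse("2025-08-13 07:07:22 +0000 UTC")            // podCreationTimestamp (second resolution)
	firstPull := parse("2025-08-13 07:07:22.54808656 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-08-13 07:07:39.821372164 +0000 UTC") // lastFinishedPulling
	running := parse("2025-08-13 07:07:40.532033003 +0000 UTC")  // watchObservedRunningTime

	e2e := running.Sub(created)          // 18.532033003s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~1.2587s, the logged podStartSLOduration up to float rounding
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```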
Aug 13 07:07:41.483951 kubelet[2496]: I0813 07:07:41.481445 2496 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-backend-key-pair\") pod \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\" (UID: \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\") " Aug 13 07:07:41.483951 kubelet[2496]: I0813 07:07:41.481635 2496 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xjmp\" (UniqueName: \"kubernetes.io/projected/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-kube-api-access-6xjmp\") pod \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\" (UID: \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\") " Aug 13 07:07:41.483951 kubelet[2496]: I0813 07:07:41.481706 2496 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-ca-bundle\") pod \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\" (UID: \"8ea21fb6-01dc-4e57-a88f-d08fd9931ec8\") " Aug 13 07:07:41.504546 kubelet[2496]: I0813 07:07:41.502894 2496 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8ea21fb6-01dc-4e57-a88f-d08fd9931ec8" (UID: "8ea21fb6-01dc-4e57-a88f-d08fd9931ec8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:07:41.523652 kubelet[2496]: I0813 07:07:41.516318 2496 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8ea21fb6-01dc-4e57-a88f-d08fd9931ec8" (UID: "8ea21fb6-01dc-4e57-a88f-d08fd9931ec8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:07:41.525438 systemd[1]: var-lib-kubelet-pods-8ea21fb6\x2d01dc\x2d4e57\x2da88f\x2dd08fd9931ec8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:07:41.531601 kubelet[2496]: I0813 07:07:41.523806 2496 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-kube-api-access-6xjmp" (OuterVolumeSpecName: "kube-api-access-6xjmp") pod "8ea21fb6-01dc-4e57-a88f-d08fd9931ec8" (UID: "8ea21fb6-01dc-4e57-a88f-d08fd9931ec8"). InnerVolumeSpecName "kube-api-access-6xjmp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:07:41.544021 systemd[1]: var-lib-kubelet-pods-8ea21fb6\x2d01dc\x2d4e57\x2da88f\x2dd08fd9931ec8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6xjmp.mount: Deactivated successfully. 
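The \x2d and \x7e runs in the mount-unit names above are systemd unit-name escapes for '-' and '~', with path separators mapped to '-'. A rough Go sketch of that escaping for ASCII paths (an illustration of the convention, not a reimplementation of systemd-escape):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath sketches systemd's path-to-unit-name escaping as seen in the
// mount units above: '/' becomes '-', while '-' and '~' (and other special
// bytes) become \xXX. ASCII paths only.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == ':', r == '.' && i != 0:
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the whisker-backend-key-pair mount unit name from the log.
	p := "/var/lib/kubelet/pods/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8/volumes/kubernetes.io~secret/whisker-backend-key-pair"
	fmt.Println(escapePath(p) + ".mount")
}
```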
Aug 13 07:07:41.582878 kubelet[2496]: I0813 07:07:41.582820 2496 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6xjmp\" (UniqueName: \"kubernetes.io/projected/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-kube-api-access-6xjmp\") on node \"ci-4081.3.5-4-06119f59db\" DevicePath \"\"" Aug 13 07:07:41.582878 kubelet[2496]: I0813 07:07:41.582862 2496 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-ca-bundle\") on node \"ci-4081.3.5-4-06119f59db\" DevicePath \"\"" Aug 13 07:07:41.582878 kubelet[2496]: I0813 07:07:41.582875 2496 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8-whisker-backend-key-pair\") on node \"ci-4081.3.5-4-06119f59db\" DevicePath \"\"" Aug 13 07:07:41.727635 systemd[1]: Removed slice kubepods-besteffort-pod8ea21fb6_01dc_4e57_a88f_d08fd9931ec8.slice - libcontainer container kubepods-besteffort-pod8ea21fb6_01dc_4e57_a88f_d08fd9931ec8.slice. Aug 13 07:07:41.839299 systemd[1]: Created slice kubepods-besteffort-podb88f2f45_7a6b_4d8b_9c4b_dbe09620f98c.slice - libcontainer container kubepods-besteffort-podb88f2f45_7a6b_4d8b_9c4b_dbe09620f98c.slice. Aug 13 07:07:41.985998 kubelet[2496]: I0813 07:07:41.985919 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c-whisker-ca-bundle\") pod \"whisker-66ff75c594-82k8f\" (UID: \"b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c\") " pod="calico-system/whisker-66ff75c594-82k8f" Aug 13 07:07:41.985998 kubelet[2496]: I0813 07:07:41.985970 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9vv\" (UniqueName: \"kubernetes.io/projected/b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c-kube-api-access-ck9vv\") pod \"whisker-66ff75c594-82k8f\" (UID: \"b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c\") " pod="calico-system/whisker-66ff75c594-82k8f" Aug 13 07:07:41.985998 kubelet[2496]: I0813 07:07:41.985999 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c-whisker-backend-key-pair\") pod \"whisker-66ff75c594-82k8f\" (UID: \"b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c\") " pod="calico-system/whisker-66ff75c594-82k8f" Aug 13 07:07:42.145292 containerd[1472]: time="2025-08-13T07:07:42.145017460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66ff75c594-82k8f,Uid:b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:42.366487 systemd-networkd[1367]: cali424f7abf72d: Link UP Aug 13 07:07:42.366781 systemd-networkd[1367]: cali424f7abf72d: Gained carrier Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.199 [INFO][3847] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.221 [INFO][3847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0 whisker-66ff75c594- calico-system b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c 907 0 2025-08-13 07:07:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66ff75c594 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db whisker-66ff75c594-82k8f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali424f7abf72d [] [] }} ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.221 [INFO][3847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.265 [INFO][3859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" HandleID="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.265 [INFO][3859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" HandleID="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-4-06119f59db", "pod":"whisker-66ff75c594-82k8f", "timestamp":"2025-08-13 07:07:42.265570449 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.265 [INFO][3859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.266 [INFO][3859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.266 [INFO][3859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.280 [INFO][3859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.296 [INFO][3859] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.304 [INFO][3859] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.308 [INFO][3859] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.312 [INFO][3859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.312 [INFO][3859] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.315 [INFO][3859] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8 Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.322 [INFO][3859] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.331 [INFO][3859] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.1/26] block=192.168.21.0/26 handle="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.332 [INFO][3859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.1/26] handle="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.332 [INFO][3859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:07:42.399455 containerd[1472]: 2025-08-13 07:07:42.332 [INFO][3859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.1/26] IPv6=[] ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" HandleID="k8s-pod-network.3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.401814 containerd[1472]: 2025-08-13 07:07:42.338 [INFO][3847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0", GenerateName:"whisker-66ff75c594-", Namespace:"calico-system", SelfLink:"", UID:"b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66ff75c594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"whisker-66ff75c594-82k8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali424f7abf72d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:42.401814 containerd[1472]: 2025-08-13 07:07:42.338 [INFO][3847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.1/32] ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.401814 containerd[1472]: 2025-08-13 07:07:42.338 [INFO][3847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali424f7abf72d ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.401814 containerd[1472]: 2025-08-13 07:07:42.359 [INFO][3847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.401814 containerd[1472]: 2025-08-13 07:07:42.361 [INFO][3847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" 
Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0", GenerateName:"whisker-66ff75c594-", Namespace:"calico-system", SelfLink:"", UID:"b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66ff75c594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8", Pod:"whisker-66ff75c594-82k8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali424f7abf72d", MAC:"96:6a:d6:17:86:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:42.401814 containerd[1472]: 2025-08-13 07:07:42.391 [INFO][3847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8" Namespace="calico-system" Pod="whisker-66ff75c594-82k8f" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--66ff75c594--82k8f-eth0" Aug 13 07:07:42.485355 containerd[1472]: time="2025-08-13T07:07:42.484007777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:42.485355 containerd[1472]: time="2025-08-13T07:07:42.484113254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:42.485355 containerd[1472]: time="2025-08-13T07:07:42.484129364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:42.485355 containerd[1472]: time="2025-08-13T07:07:42.484255346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:42.578998 systemd[1]: Started cri-containerd-3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8.scope - libcontainer container 3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8. 
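The IPAM trace above shows the pattern every pod on this node follows: take the host-wide lock, confirm the host's affinity for the block 192.168.21.0/26, claim the first free address (here 192.168.21.1), write the block back, release the lock. A toy Go sketch of just the block arithmetic; the claimed set below is an assumption, since Calico's real allocator tracks allocations in its datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block taken from the "Trying affinity for 192.168.21.0/26" lines above.
	block := netip.MustParsePrefix("192.168.21.0/26")
	size := 1 << (32 - block.Bits()) // a /26 holds 64 addresses

	// Assumption for illustration: only the network address is unavailable,
	// which is consistent with the first claim in the log being .1.
	claimed := map[netip.Addr]bool{block.Addr(): true}

	// Walk the block and take the first free address.
	for a, n := block.Addr(), 0; n < size; a, n = a.Next(), n+1 {
		if !claimed[a] {
			fmt.Printf("claimed %s/32 from block %s (capacity %d)\n", a, block, size)
			break
		}
	}
}
```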
Aug 13 07:07:42.753140 containerd[1472]: time="2025-08-13T07:07:42.751155442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66ff75c594-82k8f,Uid:b88f2f45-7a6b-4d8b-9c4b-dbe09620f98c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8\"" Aug 13 07:07:42.760168 containerd[1472]: time="2025-08-13T07:07:42.760087635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:07:43.075627 kubelet[2496]: I0813 07:07:43.075460 2496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea21fb6-01dc-4e57-a88f-d08fd9931ec8" path="/var/lib/kubelet/pods/8ea21fb6-01dc-4e57-a88f-d08fd9931ec8/volumes" Aug 13 07:07:44.103563 containerd[1472]: time="2025-08-13T07:07:44.103273722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:44.106690 containerd[1472]: time="2025-08-13T07:07:44.106611592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:07:44.107735 containerd[1472]: time="2025-08-13T07:07:44.107227081Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:44.112768 containerd[1472]: time="2025-08-13T07:07:44.112703766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.352539774s" Aug 13 07:07:44.112983 containerd[1472]: time="2025-08-13T07:07:44.112941034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:07:44.114543 containerd[1472]: time="2025-08-13T07:07:44.111954632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:44.117733 containerd[1472]: time="2025-08-13T07:07:44.117695052Z" level=info msg="CreateContainer within sandbox \"3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:07:44.133158 containerd[1472]: time="2025-08-13T07:07:44.133110792Z" level=info msg="CreateContainer within sandbox \"3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fa12251dbfdd26d9fa1628a6f32dee59bb977e34b1a55e2dcb2c5292de99d81b\"" Aug 13 07:07:44.136609 containerd[1472]: time="2025-08-13T07:07:44.135283909Z" level=info msg="StartContainer for \"fa12251dbfdd26d9fa1628a6f32dee59bb977e34b1a55e2dcb2c5292de99d81b\"" Aug 13 07:07:44.191071 systemd[1]: run-containerd-runc-k8s.io-fa12251dbfdd26d9fa1628a6f32dee59bb977e34b1a55e2dcb2c5292de99d81b-runc.1ghWMs.mount: Deactivated successfully. Aug 13 07:07:44.200130 systemd[1]: Started cri-containerd-fa12251dbfdd26d9fa1628a6f32dee59bb977e34b1a55e2dcb2c5292de99d81b.scope - libcontainer container fa12251dbfdd26d9fa1628a6f32dee59bb977e34b1a55e2dcb2c5292de99d81b. 
Aug 13 07:07:44.260097 systemd-networkd[1367]: cali424f7abf72d: Gained IPv6LL Aug 13 07:07:44.295983 containerd[1472]: time="2025-08-13T07:07:44.295780816Z" level=info msg="StartContainer for \"fa12251dbfdd26d9fa1628a6f32dee59bb977e34b1a55e2dcb2c5292de99d81b\" returns successfully" Aug 13 07:07:44.300568 containerd[1472]: time="2025-08-13T07:07:44.300041774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:07:46.069667 containerd[1472]: time="2025-08-13T07:07:46.069528176Z" level=info msg="StopPodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\"" Aug 13 07:07:46.074957 containerd[1472]: time="2025-08-13T07:07:46.073989752Z" level=info msg="StopPodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\"" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.282 [INFO][4136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.282 [INFO][4136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" iface="eth0" netns="/var/run/netns/cni-e4c808e6-635c-7860-fc16-62e13dee468b" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.282 [INFO][4136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" iface="eth0" netns="/var/run/netns/cni-e4c808e6-635c-7860-fc16-62e13dee468b" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.285 [INFO][4136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" iface="eth0" netns="/var/run/netns/cni-e4c808e6-635c-7860-fc16-62e13dee468b" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.285 [INFO][4136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.285 [INFO][4136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.351 [INFO][4150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.351 [INFO][4150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.351 [INFO][4150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.361 [WARNING][4150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.361 [INFO][4150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.364 [INFO][4150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:46.392572 containerd[1472]: 2025-08-13 07:07:46.374 [INFO][4136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:07:46.392572 containerd[1472]: time="2025-08-13T07:07:46.390283627Z" level=info msg="TearDown network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" successfully" Aug 13 07:07:46.392572 containerd[1472]: time="2025-08-13T07:07:46.390327349Z" level=info msg="StopPodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" returns successfully" Aug 13 07:07:46.398369 kubelet[2496]: E0813 07:07:46.397156 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:46.400684 containerd[1472]: time="2025-08-13T07:07:46.400330086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tf4rp,Uid:553898de-3f11-4cc6-b330-03fffa4336bb,Namespace:kube-system,Attempt:1,}" Aug 13 07:07:46.400965 systemd[1]: run-netns-cni\x2de4c808e6\x2d635c\x2d7860\x2dfc16\x2d62e13dee468b.mount: Deactivated successfully. Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.278 [INFO][4135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.280 [INFO][4135] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" iface="eth0" netns="/var/run/netns/cni-c7d48ef9-4f46-7557-0c45-b2e9bf6dd1aa" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.283 [INFO][4135] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" iface="eth0" netns="/var/run/netns/cni-c7d48ef9-4f46-7557-0c45-b2e9bf6dd1aa" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.285 [INFO][4135] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" iface="eth0" netns="/var/run/netns/cni-c7d48ef9-4f46-7557-0c45-b2e9bf6dd1aa" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.285 [INFO][4135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.285 [INFO][4135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.384 [INFO][4148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.384 [INFO][4148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.385 [INFO][4148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.406 [WARNING][4148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.406 [INFO][4148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.412 [INFO][4148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:46.430777 containerd[1472]: 2025-08-13 07:07:46.423 [INFO][4135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:07:46.436734 containerd[1472]: time="2025-08-13T07:07:46.436698772Z" level=info msg="TearDown network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" successfully" Aug 13 07:07:46.436864 containerd[1472]: time="2025-08-13T07:07:46.436850305Z" level=info msg="StopPodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" returns successfully" Aug 13 07:07:46.439106 systemd[1]: run-netns-cni\x2dc7d48ef9\x2d4f46\x2d7557\x2d0c45\x2db2e9bf6dd1aa.mount: Deactivated successfully. 
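The "Nameserver limits exceeded" kubelet entry a few lines above reflects the classic glibc resolv.conf cap of three nameservers (MAXNS): kubelet warns and applies only the first three entries it assembled. A sketch of that clamping; the four-entry input is hypothetical, as only the applied line is visible in the log:

```go
package main

import "fmt"

// Sketch of the clamping behind the kubelet warning above. Kubernetes limits
// pod resolv.conf files to 3 nameservers because glibc ignores any beyond
// MAXNS=3; kubelet logs the warning and keeps the first three entries.
func main() {
	const maxNameservers = 3
	assembled := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.2"} // hypothetical input
	applied := assembled
	if len(applied) > maxNameservers {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", applied)
}
```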
Aug 13 07:07:46.440713 containerd[1472]: time="2025-08-13T07:07:46.440304744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j2p8w,Uid:8831262f-9738-4f38-9a9d-147ae2cd3257,Namespace:calico-system,Attempt:1,}" Aug 13 07:07:46.804753 systemd-networkd[1367]: calidc9c347ccc5: Link UP Aug 13 07:07:46.810305 systemd-networkd[1367]: calidc9c347ccc5: Gained carrier Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.562 [INFO][4172] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.599 [INFO][4172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0 coredns-668d6bf9bc- kube-system 553898de-3f11-4cc6-b330-03fffa4336bb 928 0 2025-08-13 07:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db coredns-668d6bf9bc-tf4rp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc9c347ccc5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.599 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.661 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" HandleID="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.663 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" HandleID="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-4-06119f59db", "pod":"coredns-668d6bf9bc-tf4rp", "timestamp":"2025-08-13 07:07:46.659779652 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.663 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.663 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
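The kubelet error repeated throughout this log (`dns.go:153 "Nameserver limits exceeded"`, above and again at 07:07:47.259 below) stems from the glibc resolver honoring at most three `nameserver` lines in resolv.conf; kubelet applies the first three of the merged list and logs what it kept. The applied line here, `67.207.67.3 67.207.67.2 67.207.67.3`, even carries a duplicate. A rough sketch of that cap; the dedupe variant is my own addition for illustration, and the log suggests the real path truncates without deduplicating:

```go
package main

import "fmt"

const maxNameservers = 3 // glibc resolv.conf limit (MAXNS)

// applyLimit keeps at most maxNameservers entries, preserving order.
func applyLimit(servers []string, dedupe bool) []string {
	out := make([]string, 0, maxNameservers)
	seen := map[string]bool{}
	for _, s := range servers {
		if dedupe && seen[s] {
			continue
		}
		seen[s] = true
		out = append(out, s)
		if len(out) == maxNameservers {
			break
		}
	}
	return out
}

func main() {
	merged := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2"}
	fmt.Println(applyLimit(merged, false)) // [67.207.67.3 67.207.67.2 67.207.67.3], as in the log
	fmt.Println(applyLimit(merged, true))  // [67.207.67.3 67.207.67.2 10.0.0.2]
}
```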
Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.663 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.688 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.701 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.716 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.722 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.738 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.738 [INFO][4200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.748 [INFO][4200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0 Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.755 [INFO][4200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.775 [INFO][4200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.2/26] block=192.168.21.0/26 handle="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.776 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.2/26] handle="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.776 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
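The sequence just logged (look up the host's affinity, load block 192.168.21.0/26, claim the next address, write the block back) is Calico's block-based IPAM: each node holds an affine /26 and hands out ordinals from it. A simplified sketch of claiming the lowest free ordinal from such a block, using only the standard library; the real Calico block also tracks handles and attributes:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models an affine IPAM block: a /26 plus a used-ordinal bitmap.
type block struct {
	cidr netip.Prefix
	used [64]bool // a /26 has 64 ordinals
}

// claimNext returns the lowest free address in the block.
func (b *block) claimNext() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for ord := 0; ord < 64; ord++ {
		if !b.used[ord] {
			b.used[ord] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.21.0/26")}
	b.used[0], b.used[1] = true, true // .0 and .1 assumed already taken on this node
	ip, _ := b.claimNext()
	fmt.Println(ip) // 192.168.21.2, matching the coredns pod above
}
```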
Aug 13 07:07:46.892026 containerd[1472]: 2025-08-13 07:07:46.777 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.2/26] IPv6=[] ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" HandleID="k8s-pod-network.79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.895106 containerd[1472]: 2025-08-13 07:07:46.789 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"553898de-3f11-4cc6-b330-03fffa4336bb", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"coredns-668d6bf9bc-tf4rp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc9c347ccc5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:46.895106 containerd[1472]: 2025-08-13 07:07:46.790 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.2/32] ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.895106 containerd[1472]: 2025-08-13 07:07:46.791 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc9c347ccc5 ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.895106 containerd[1472]: 2025-08-13 07:07:46.813 [INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" 
WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:46.895106 containerd[1472]: 2025-08-13 07:07:46.814 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"553898de-3f11-4cc6-b330-03fffa4336bb", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0", Pod:"coredns-668d6bf9bc-tf4rp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc9c347ccc5", MAC:"9a:4b:31:81:2a:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:46.895106 containerd[1472]: 2025-08-13 07:07:46.866 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tf4rp" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:07:47.005750 containerd[1472]: time="2025-08-13T07:07:47.005480601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:47.006070 containerd[1472]: time="2025-08-13T07:07:47.006016371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:47.006070 containerd[1472]: time="2025-08-13T07:07:47.006049907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.007927 containerd[1472]: time="2025-08-13T07:07:47.006460538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.053696 systemd-networkd[1367]: cali61de62262c5: Link UP Aug 13 07:07:47.058816 systemd-networkd[1367]: cali61de62262c5: Gained carrier Aug 13 07:07:47.069765 containerd[1472]: time="2025-08-13T07:07:47.068799399Z" level=info msg="StopPodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\"" Aug 13 07:07:47.071639 containerd[1472]: time="2025-08-13T07:07:47.070922977Z" level=info msg="StopPodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\"" Aug 13 07:07:47.124780 systemd[1]: Started cri-containerd-79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0.scope - libcontainer container 79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0. Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.741 [INFO][4184] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.779 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0 goldmane-768f4c5c69- calico-system 8831262f-9738-4f38-9a9d-147ae2cd3257 927 0 2025-08-13 07:07:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db goldmane-768f4c5c69-j2p8w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali61de62262c5 [] [] }} ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.779 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.911 [INFO][4211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" HandleID="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.911 [INFO][4211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" HandleID="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-4-06119f59db", "pod":"goldmane-768f4c5c69-j2p8w", "timestamp":"2025-08-13 07:07:46.909530981 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 
07:07:46.911 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.911 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.911 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.934 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.953 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.966 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.973 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.982 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.982 [INFO][4211] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:46.987 [INFO][4211] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5 Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:47.001 [INFO][4211] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:47.022 [INFO][4211] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.3/26] block=192.168.21.0/26 handle="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:47.023 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.3/26] handle="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:47.023 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
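Note that both assignments serialize on the "host-wide IPAM lock" bracketing each claim above, which is why concurrent sandbox creations still receive distinct, consecutive ordinals: coredns got 192.168.21.2 and goldmane 192.168.21.3. A toy illustration of that serialization, with a mutex standing in for the host-wide lock:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator hands out consecutive ordinals under one host-wide lock,
// a simplified stand-in for the per-host IPAM lock seen in the log.
type allocator struct {
	mu   sync.Mutex // "host-wide IPAM lock"
	next int
}

func (a *allocator) assign(pod string) string {
	a.mu.Lock() // About to acquire host-wide IPAM lock.
	defer a.mu.Unlock()
	ip := fmt.Sprintf("192.168.21.%d", a.next)
	a.next++
	return fmt.Sprintf("%s -> %s", pod, ip)
	// Lock released on return.
}

func main() {
	a := &allocator{next: 2}
	var wg sync.WaitGroup
	results := make(chan string, 2)
	for _, pod := range []string{"coredns-668d6bf9bc-tf4rp", "goldmane-768f4c5c69-j2p8w"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); results <- a.assign(p) }(pod)
	}
	wg.Wait()
	close(results)
	for r := range results {
		fmt.Println(r) // each pod gets a distinct address; completion order may vary
	}
}
```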
Aug 13 07:07:47.133405 containerd[1472]: 2025-08-13 07:07:47.023 [INFO][4211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.3/26] IPv6=[] ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" HandleID="k8s-pod-network.6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.137472 containerd[1472]: 2025-08-13 07:07:47.034 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8831262f-9738-4f38-9a9d-147ae2cd3257", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"goldmane-768f4c5c69-j2p8w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61de62262c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:47.137472 containerd[1472]: 2025-08-13 07:07:47.035 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.3/32] ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.137472 containerd[1472]: 2025-08-13 07:07:47.035 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61de62262c5 ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.137472 containerd[1472]: 2025-08-13 07:07:47.068 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.137472 containerd[1472]: 2025-08-13 07:07:47.068 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8831262f-9738-4f38-9a9d-147ae2cd3257", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5", Pod:"goldmane-768f4c5c69-j2p8w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61de62262c5", MAC:"92:ad:f8:5a:e8:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:47.137472 containerd[1472]: 2025-08-13 07:07:47.097 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5" Namespace="calico-system" Pod="goldmane-768f4c5c69-j2p8w" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:07:47.251767 containerd[1472]: time="2025-08-13T07:07:47.251719003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tf4rp,Uid:553898de-3f11-4cc6-b330-03fffa4336bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0\"" Aug 13 07:07:47.260060 kubelet[2496]: E0813 07:07:47.259580 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:47.273476 containerd[1472]: time="2025-08-13T07:07:47.273097427Z" level=info msg="CreateContainer within sandbox \"79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:07:47.280810 containerd[1472]: time="2025-08-13T07:07:47.271999983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:47.280810 containerd[1472]: time="2025-08-13T07:07:47.274321738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:47.280810 containerd[1472]: time="2025-08-13T07:07:47.274394812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.285081 containerd[1472]: time="2025-08-13T07:07:47.284841077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.348766 systemd[1]: Started cri-containerd-6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5.scope - libcontainer container 6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5. Aug 13 07:07:47.361278 containerd[1472]: time="2025-08-13T07:07:47.361155899Z" level=info msg="CreateContainer within sandbox \"79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7160329910293eb19214fb38747616353bd17ca917241949abe61cd9b3fa1196\"" Aug 13 07:07:47.365969 containerd[1472]: time="2025-08-13T07:07:47.365860382Z" level=info msg="StartContainer for \"7160329910293eb19214fb38747616353bd17ca917241949abe61cd9b3fa1196\"" Aug 13 07:07:47.483416 systemd[1]: Started cri-containerd-7160329910293eb19214fb38747616353bd17ca917241949abe61cd9b3fa1196.scope - libcontainer container 7160329910293eb19214fb38747616353bd17ca917241949abe61cd9b3fa1196. Aug 13 07:07:47.514296 containerd[1472]: time="2025-08-13T07:07:47.511535681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:47.514296 containerd[1472]: time="2025-08-13T07:07:47.512529587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:07:47.514296 containerd[1472]: time="2025-08-13T07:07:47.513287585Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:47.517655 containerd[1472]: time="2025-08-13T07:07:47.517202135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:47.520565 containerd[1472]: time="2025-08-13T07:07:47.520037778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.219938662s" Aug 13 07:07:47.520565 containerd[1472]: time="2025-08-13T07:07:47.520085719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:07:47.526529 containerd[1472]: time="2025-08-13T07:07:47.526169712Z" level=info msg="CreateContainer within sandbox \"3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:07:47.527156 containerd[1472]: time="2025-08-13T07:07:47.526932408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j2p8w,Uid:8831262f-9738-4f38-9a9d-147ae2cd3257,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5\"" Aug 13 07:07:47.531686 
containerd[1472]: time="2025-08-13T07:07:47.531636445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:07:47.551906 containerd[1472]: time="2025-08-13T07:07:47.549977688Z" level=info msg="CreateContainer within sandbox \"3a3de955fb4ba004d8eb655a081a71b21cabcfc8d56587c85b4048b5a3805cd8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"40d323fbe7fab782fcf0d5ede72e6774a2da645bd2ff8923355317ccb7f6aea3\"" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.316 [INFO][4270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.316 [INFO][4270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" iface="eth0" netns="/var/run/netns/cni-fb750698-568f-82be-4975-f90a6f8d4a6b" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.319 [INFO][4270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" iface="eth0" netns="/var/run/netns/cni-fb750698-568f-82be-4975-f90a6f8d4a6b" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.330 [INFO][4270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" iface="eth0" netns="/var/run/netns/cni-fb750698-568f-82be-4975-f90a6f8d4a6b" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.334 [INFO][4270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.334 [INFO][4270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.508 [INFO][4329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.508 [INFO][4329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.509 [INFO][4329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.538 [WARNING][4329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.538 [INFO][4329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.544 [INFO][4329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:47.551906 containerd[1472]: 2025-08-13 07:07:47.547 [INFO][4270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:07:47.551906 containerd[1472]: time="2025-08-13T07:07:47.551747086Z" level=info msg="TearDown network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" successfully" Aug 13 07:07:47.551906 containerd[1472]: time="2025-08-13T07:07:47.551793758Z" level=info msg="StopPodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" returns successfully" Aug 13 07:07:47.554882 containerd[1472]: time="2025-08-13T07:07:47.554284815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4bpr,Uid:39ea9a35-bd67-4511-9bad-6e60fa944270,Namespace:calico-system,Attempt:1,}" Aug 13 07:07:47.557895 containerd[1472]: time="2025-08-13T07:07:47.556662896Z" level=info msg="StartContainer for \"40d323fbe7fab782fcf0d5ede72e6774a2da645bd2ff8923355317ccb7f6aea3\"" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.368 [INFO][4285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.369 [INFO][4285] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" iface="eth0" netns="/var/run/netns/cni-13ce5b16-6cd4-a685-0e66-af14276dd21d" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.370 [INFO][4285] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" iface="eth0" netns="/var/run/netns/cni-13ce5b16-6cd4-a685-0e66-af14276dd21d" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.374 [INFO][4285] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" iface="eth0" netns="/var/run/netns/cni-13ce5b16-6cd4-a685-0e66-af14276dd21d" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.374 [INFO][4285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.374 [INFO][4285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.515 [INFO][4338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.517 [INFO][4338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.543 [INFO][4338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.565 [WARNING][4338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.565 [INFO][4338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.570 [INFO][4338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:47.584986 containerd[1472]: 2025-08-13 07:07:47.579 [INFO][4285] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:07:47.585740 containerd[1472]: time="2025-08-13T07:07:47.585229929Z" level=info msg="TearDown network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" successfully" Aug 13 07:07:47.585740 containerd[1472]: time="2025-08-13T07:07:47.585269482Z" level=info msg="StopPodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" returns successfully" Aug 13 07:07:47.586409 containerd[1472]: time="2025-08-13T07:07:47.586363753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mccbf,Uid:0227ea35-1919-478f-af60-278b21232cf4,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:07:47.616172 containerd[1472]: time="2025-08-13T07:07:47.615909762Z" level=info msg="StartContainer for \"7160329910293eb19214fb38747616353bd17ca917241949abe61cd9b3fa1196\" returns successfully" Aug 13 07:07:47.666180 systemd[1]: Started cri-containerd-40d323fbe7fab782fcf0d5ede72e6774a2da645bd2ff8923355317ccb7f6aea3.scope - libcontainer container 40d323fbe7fab782fcf0d5ede72e6774a2da645bd2ff8923355317ccb7f6aea3. 
Aug 13 07:07:47.838062 containerd[1472]: time="2025-08-13T07:07:47.837488982Z" level=info msg="StartContainer for \"40d323fbe7fab782fcf0d5ede72e6774a2da645bd2ff8923355317ccb7f6aea3\" returns successfully" Aug 13 07:07:47.931891 systemd-networkd[1367]: cali8c2196893a1: Link UP Aug 13 07:07:47.938027 systemd-networkd[1367]: cali8c2196893a1: Gained carrier Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.684 [INFO][4390] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.715 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0 csi-node-driver- calico-system 39ea9a35-bd67-4511-9bad-6e60fa944270 944 0 2025-08-13 07:07:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db csi-node-driver-k4bpr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c2196893a1 [] [] }} ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.715 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.792 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" HandleID="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.793 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" HandleID="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000330d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-4-06119f59db", "pod":"csi-node-driver-k4bpr", "timestamp":"2025-08-13 07:07:47.791476929 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.793 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.793 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.793 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.828 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.844 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.856 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.861 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.866 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.866 [INFO][4447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.871 [INFO][4447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98 Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.883 [INFO][4447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.894 [INFO][4447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.4/26] block=192.168.21.0/26 handle="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.895 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.4/26] handle="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.895 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
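On the ADD side, the "Setting the host side veth name to cali…" steps, and systemd-networkd's matching "Link UP" and "Gained carrier" messages, correspond to creating a veth pair, moving one end into the pod namespace as eth0, and bringing the host end (the caliXXX interface) up. A minimal netlink sketch of creating the pair and raising the host side; names are taken from the log or hypothetical, and this too needs root:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	hostSide := "calidc9c347ccc5" // host-side name, as in the log above
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: hostSide},
		PeerName:  "tmp-pod-eth0", // later moved into the pod netns and renamed eth0
	}
	if err := netlink.LinkAdd(veth); err != nil {
		fmt.Println("create veth:", err)
		return
	}
	// Bringing the host end up is what systemd-networkd then reports as
	// "Link UP" and, once the peer side is up as well, "Gained carrier".
	if err := netlink.LinkSetUp(veth); err != nil {
		fmt.Println("set up:", err)
		return
	}
	fmt.Println("host side", hostSide, "is up")
}
```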
Aug 13 07:07:47.996793 containerd[1472]: 2025-08-13 07:07:47.895 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.4/26] IPv6=[] ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" HandleID="k8s-pod-network.e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:48.001138 containerd[1472]: 2025-08-13 07:07:47.907 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39ea9a35-bd67-4511-9bad-6e60fa944270", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"csi-node-driver-k4bpr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2196893a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.001138 containerd[1472]: 2025-08-13 07:07:47.907 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.4/32] ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:48.001138 containerd[1472]: 2025-08-13 07:07:47.907 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c2196893a1 ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:48.001138 containerd[1472]: 2025-08-13 07:07:47.937 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:48.001138 containerd[1472]: 2025-08-13 07:07:47.957 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39ea9a35-bd67-4511-9bad-6e60fa944270", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98", Pod:"csi-node-driver-k4bpr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2196893a1", MAC:"ca:6d:14:d0:2f:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.001138 containerd[1472]: 2025-08-13 07:07:47.989 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98" Namespace="calico-system" Pod="csi-node-driver-k4bpr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:07:48.044956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417940343.mount: Deactivated successfully. Aug 13 07:07:48.045112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887904925.mount: Deactivated successfully. Aug 13 07:07:48.045217 systemd[1]: run-netns-cni\x2d13ce5b16\x2d6cd4\x2da685\x2d0e66\x2daf14276dd21d.mount: Deactivated successfully. Aug 13 07:07:48.045320 systemd[1]: run-netns-cni\x2dfb750698\x2d568f\x2d82be\x2d4975\x2df90a6f8d4a6b.mount: Deactivated successfully. Aug 13 07:07:48.073098 containerd[1472]: time="2025-08-13T07:07:48.072749621Z" level=info msg="StopPodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\"" Aug 13 07:07:48.075623 containerd[1472]: time="2025-08-13T07:07:48.075394661Z" level=info msg="StopPodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\"" Aug 13 07:07:48.104562 containerd[1472]: time="2025-08-13T07:07:48.103809756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:48.104562 containerd[1472]: time="2025-08-13T07:07:48.103905034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:48.104562 containerd[1472]: time="2025-08-13T07:07:48.103947308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.104562 containerd[1472]: time="2025-08-13T07:07:48.104098787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.197040 systemd[1]: Started cri-containerd-e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98.scope - libcontainer container e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98. Aug 13 07:07:48.228854 systemd-networkd[1367]: calie892baa2b2e: Link UP Aug 13 07:07:48.247426 systemd-networkd[1367]: calie892baa2b2e: Gained carrier Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.676 [INFO][4406] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.700 [INFO][4406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0 calico-apiserver-785ff45d87- calico-apiserver 0227ea35-1919-478f-af60-278b21232cf4 945 0 2025-08-13 07:07:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:785ff45d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db calico-apiserver-785ff45d87-mccbf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie892baa2b2e [] [] }} ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.700 [INFO][4406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.824 [INFO][4442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" HandleID="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.828 [INFO][4442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" HandleID="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030ceb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-4-06119f59db", "pod":"calico-apiserver-785ff45d87-mccbf", "timestamp":"2025-08-13 07:07:47.823903818 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.828 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.896 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.896 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.942 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:47.981 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.005 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.024 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.047 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.048 [INFO][4442] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.068 [INFO][4442] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218 Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.098 [INFO][4442] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.156 [INFO][4442] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.5/26] block=192.168.21.0/26 handle="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.160 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.5/26] handle="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.161 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
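Each successful ADD in this log ends with "Wrote updated endpoint to datastore": the plugin persists a WorkloadEndpoint recording the pod, node, interface name, MAC, and assigned /32, which is exactly what the large structs dumped above contain. A trimmed-down mirror of the fields visible for the csi-node-driver pod, serialized roughly the way a datastore client might; this struct is a simplified stand-in, not the real projectcalico.org/v3 type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// workloadEndpoint is a simplified stand-in for v3.WorkloadEndpoint,
// keeping only the fields visible in the log entries above.
type workloadEndpoint struct {
	Node          string   `json:"node"`
	Pod           string   `json:"pod"`
	Endpoint      string   `json:"endpoint"`
	ContainerID   string   `json:"containerID"`
	IPNetworks    []string `json:"ipNetworks"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac"`
	Profiles      []string `json:"profiles"`
}

func main() {
	wep := workloadEndpoint{
		Node:          "ci-4081.3.5-4-06119f59db",
		Pod:           "csi-node-driver-k4bpr",
		Endpoint:      "eth0",
		ContainerID:   "e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98",
		IPNetworks:    []string{"192.168.21.4/32"},
		InterfaceName: "cali8c2196893a1",
		MAC:           "ca:6d:14:d0:2f:11",
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.csi-node-driver"},
	}
	out, _ := json.MarshalIndent(wep, "", "  ")
	fmt.Println(string(out)) // roughly what "Wrote updated endpoint to datastore" persists
}
```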
Aug 13 07:07:48.296413 containerd[1472]: 2025-08-13 07:07:48.164 [INFO][4442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.5/26] IPv6=[] ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" HandleID="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.300395 containerd[1472]: 2025-08-13 07:07:48.187 [INFO][4406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0227ea35-1919-478f-af60-278b21232cf4", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"calico-apiserver-785ff45d87-mccbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie892baa2b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.300395 containerd[1472]: 2025-08-13 07:07:48.187 [INFO][4406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.5/32] ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.300395 containerd[1472]: 2025-08-13 07:07:48.187 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie892baa2b2e ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.300395 containerd[1472]: 2025-08-13 07:07:48.253 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.300395 containerd[1472]: 2025-08-13 07:07:48.258 [INFO][4406] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0227ea35-1919-478f-af60-278b21232cf4", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218", Pod:"calico-apiserver-785ff45d87-mccbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie892baa2b2e", MAC:"de:6a:ed:29:7d:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.300395 containerd[1472]: 2025-08-13 07:07:48.280 [INFO][4406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mccbf" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:07:48.401915 containerd[1472]: time="2025-08-13T07:07:48.401699958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4bpr,Uid:39ea9a35-bd67-4511-9bad-6e60fa944270,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98\"" Aug 13 07:07:48.425798 containerd[1472]: time="2025-08-13T07:07:48.424209357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:48.425798 containerd[1472]: time="2025-08-13T07:07:48.424311178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:48.425798 containerd[1472]: time="2025-08-13T07:07:48.424329940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.427592 containerd[1472]: time="2025-08-13T07:07:48.426973553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.536436 systemd[1]: Started cri-containerd-d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218.scope - libcontainer container d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218. Aug 13 07:07:48.543262 systemd-networkd[1367]: cali61de62262c5: Gained IPv6LL Aug 13 07:07:48.642536 kubelet[2496]: E0813 07:07:48.642226 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:48.733107 systemd-networkd[1367]: calidc9c347ccc5: Gained IPv6LL Aug 13 07:07:48.756853 kubelet[2496]: I0813 07:07:48.756774 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66ff75c594-82k8f" podStartSLOduration=2.993035325 podStartE2EDuration="7.756741071s" podCreationTimestamp="2025-08-13 07:07:41 +0000 UTC" firstStartedPulling="2025-08-13 07:07:42.758154528 +0000 UTC m=+41.867183417" lastFinishedPulling="2025-08-13 07:07:47.521860273 +0000 UTC m=+46.630889163" observedRunningTime="2025-08-13 07:07:48.693392979 +0000 UTC m=+47.802421881" watchObservedRunningTime="2025-08-13 07:07:48.756741071 +0000 UTC m=+47.865769968" Aug 13 07:07:48.799274 containerd[1472]: time="2025-08-13T07:07:48.796884459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mccbf,Uid:0227ea35-1919-478f-af60-278b21232cf4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218\"" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.421 [INFO][4523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.422 [INFO][4523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" iface="eth0" netns="/var/run/netns/cni-41d95b4d-8193-c5fb-df72-d97e4cd1b105" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.423 [INFO][4523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" iface="eth0" netns="/var/run/netns/cni-41d95b4d-8193-c5fb-df72-d97e4cd1b105" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.426 [INFO][4523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" iface="eth0" netns="/var/run/netns/cni-41d95b4d-8193-c5fb-df72-d97e4cd1b105" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.426 [INFO][4523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.427 [INFO][4523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.722 [INFO][4592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.723 [INFO][4592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.728 [INFO][4592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.785 [WARNING][4592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.785 [INFO][4592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.790 [INFO][4592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:48.813675 containerd[1472]: 2025-08-13 07:07:48.807 [INFO][4523] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:07:48.814646 containerd[1472]: time="2025-08-13T07:07:48.813868570Z" level=info msg="TearDown network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" successfully" Aug 13 07:07:48.814646 containerd[1472]: time="2025-08-13T07:07:48.813915849Z" level=info msg="StopPodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" returns successfully" Aug 13 07:07:48.846553 containerd[1472]: time="2025-08-13T07:07:48.844689372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb8c445f9-bjpjh,Uid:2150c293-bab3-40c4-95af-02ef0839fd92,Namespace:calico-system,Attempt:1,}" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.461 [INFO][4522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.465 [INFO][4522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" iface="eth0" netns="/var/run/netns/cni-134a1c69-d423-083c-e167-1e51032066a9" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.466 [INFO][4522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" iface="eth0" netns="/var/run/netns/cni-134a1c69-d423-083c-e167-1e51032066a9" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.467 [INFO][4522] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" iface="eth0" netns="/var/run/netns/cni-134a1c69-d423-083c-e167-1e51032066a9" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.467 [INFO][4522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.467 [INFO][4522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.750 [INFO][4603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.753 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.790 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.821 [WARNING][4603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.821 [INFO][4603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.847 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:48.862244 containerd[1472]: 2025-08-13 07:07:48.853 [INFO][4522] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:07:48.863859 containerd[1472]: time="2025-08-13T07:07:48.863799080Z" level=info msg="TearDown network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" successfully" Aug 13 07:07:48.863859 containerd[1472]: time="2025-08-13T07:07:48.863855530Z" level=info msg="StopPodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" returns successfully" Aug 13 07:07:48.864409 kubelet[2496]: E0813 07:07:48.864375 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:48.865120 containerd[1472]: time="2025-08-13T07:07:48.865079875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p7m85,Uid:1e92778b-e18e-498e-93e6-ad7ba8ca5d17,Namespace:kube-system,Attempt:1,}" Aug 13 07:07:49.031502 systemd[1]: run-netns-cni\x2d134a1c69\x2dd423\x2d083c\x2de167\x2d1e51032066a9.mount: Deactivated successfully. Aug 13 07:07:49.035245 systemd[1]: run-netns-cni\x2d41d95b4d\x2d8193\x2dc5fb\x2ddf72\x2dd97e4cd1b105.mount: Deactivated successfully. Aug 13 07:07:49.097222 containerd[1472]: time="2025-08-13T07:07:49.096220972Z" level=info msg="StopPodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\"" Aug 13 07:07:49.350703 kubelet[2496]: I0813 07:07:49.350236 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tf4rp" podStartSLOduration=45.34967225 podStartE2EDuration="45.34967225s" podCreationTimestamp="2025-08-13 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:48.759231583 +0000 UTC m=+47.868260481" watchObservedRunningTime="2025-08-13 07:07:49.34967225 +0000 UTC m=+48.458701250" Aug 13 07:07:49.374756 systemd-networkd[1367]: cali25f8031ca06: Link UP Aug 13 07:07:49.375124 systemd-networkd[1367]: cali8c2196893a1: Gained IPv6LL Aug 13 07:07:49.388989 systemd-networkd[1367]: cali25f8031ca06: Gained carrier Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:48.957 [INFO][4636] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.006 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0 calico-kube-controllers-6fb8c445f9- calico-system 2150c293-bab3-40c4-95af-02ef0839fd92 961 0 2025-08-13 07:07:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fb8c445f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db calico-kube-controllers-6fb8c445f9-bjpjh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali25f8031ca06 [] [] }} ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.007 [INFO][4636] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.221 [INFO][4664] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" HandleID="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.229 [INFO][4664] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" HandleID="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000338fa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-4-06119f59db", "pod":"calico-kube-controllers-6fb8c445f9-bjpjh", "timestamp":"2025-08-13 07:07:49.221154177 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.229 [INFO][4664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.229 [INFO][4664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.229 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.250 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.266 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.280 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.291 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.300 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.300 [INFO][4664] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.304 [INFO][4664] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.315 [INFO][4664] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.328 [INFO][4664] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.6/26] block=192.168.21.0/26 handle="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.328 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.6/26] handle="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.328 [INFO][4664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
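
The same cycle repeats for each pod in this journal, handing out 192.168.21.5 through .8 in turn, and each cycle ends with an "ipam_plugin.go 283: Calico CNI IPAM assigned addresses" record pairing a ContainerID with its address. That makes a captured journal easy to tally; a sketch of such a scraper follows, with the regular expression derived from the record layout seen here rather than from any stable Calico output contract.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// assignedRE matches the "Calico CNI IPAM assigned addresses" records in
// this journal, capturing the IPv4 list and the 64-hex-digit ContainerID.
var assignedRE = regexp.MustCompile(
	`Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\] IPv6=\[[^\]]*\] ContainerID="([0-9a-f]{64})"`)

func main() {
	// Two records abbreviated from this journal.
	journal := `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.5/26] IPv6=[] ContainerID="d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218" HandleID="k8s-pod-network.d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218"
ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.6/26] IPv6=[] ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" HandleID="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a"`
	sc := bufio.NewScanner(strings.NewReader(journal))
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines run long
	for sc.Scan() {
		if m := assignedRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s... -> %s\n", m[2][:12], m[1])
		}
	}
}
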
Aug 13 07:07:49.456225 containerd[1472]: 2025-08-13 07:07:49.330 [INFO][4664] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.6/26] IPv6=[] ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" HandleID="k8s-pod-network.e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:49.460859 containerd[1472]: 2025-08-13 07:07:49.342 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0", GenerateName:"calico-kube-controllers-6fb8c445f9-", Namespace:"calico-system", SelfLink:"", UID:"2150c293-bab3-40c4-95af-02ef0839fd92", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb8c445f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"calico-kube-controllers-6fb8c445f9-bjpjh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25f8031ca06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:49.460859 containerd[1472]: 2025-08-13 07:07:49.342 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.6/32] ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:49.460859 containerd[1472]: 2025-08-13 07:07:49.342 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25f8031ca06 ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:49.460859 containerd[1472]: 2025-08-13 07:07:49.391 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" 
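
The endpoint=&v3.WorkloadEndpoint{...} dumps above are Go struct literals from Calico's projectcalico.org/v3 API, printed verbatim into the journal; "Wrote updated endpoint to datastore" refers to persisting exactly this object. A hand-trimmed stand-in for its shape follows; the field subset and JSON tags are this sketch's choices, not the v3 schema.

package main

import (
	"encoding/json"
	"fmt"
)

// workloadEndpoint keeps only the fields visible in the dumps above.
type workloadEndpoint struct {
	Kind          string   `json:"kind"`
	APIVersion    string   `json:"apiVersion"`
	Name          string   `json:"name"`
	Namespace     string   `json:"namespace"`
	Node          string   `json:"node"`
	Pod           string   `json:"pod"`
	Endpoint      string   `json:"endpoint"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac,omitempty"` // empty until the veth exists
	IPNetworks    []string `json:"ipNetworks"`
	Profiles      []string `json:"profiles"`
}

func main() {
	ep := workloadEndpoint{
		Kind:          "WorkloadEndpoint",
		APIVersion:    "projectcalico.org/v3",
		Name:          "ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0",
		Namespace:     "calico-system",
		Node:          "ci-4081.3.5-4-06119f59db",
		Pod:           "calico-kube-controllers-6fb8c445f9-bjpjh",
		Endpoint:      "eth0",
		InterfaceName: "cali25f8031ca06",
		IPNetworks:    []string{"192.168.21.6/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"},
	}
	out, err := json.MarshalIndent(ep, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Note the two-phase write visible in the journal: "Populated endpoint" logs the object with MAC:"" first, and the "Added Mac, interface name, and active container ID" record that follows logs it again once the dataplane has filled those fields in.
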
Aug 13 07:07:49.460859 containerd[1472]: 2025-08-13 07:07:49.394 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0", GenerateName:"calico-kube-controllers-6fb8c445f9-", Namespace:"calico-system", SelfLink:"", UID:"2150c293-bab3-40c4-95af-02ef0839fd92", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb8c445f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a", Pod:"calico-kube-controllers-6fb8c445f9-bjpjh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25f8031ca06", MAC:"d6:c0:02:a9:d7:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:49.460859 containerd[1472]: 2025-08-13 07:07:49.440 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a" Namespace="calico-system" Pod="calico-kube-controllers-6fb8c445f9-bjpjh" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:07:49.587772 systemd-networkd[1367]: calie8e100c4cb0: Link UP Aug 13 07:07:49.594191 systemd-networkd[1367]: calie8e100c4cb0: Gained carrier Aug 13 07:07:49.660135 containerd[1472]: time="2025-08-13T07:07:49.658580980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:49.660135 containerd[1472]: time="2025-08-13T07:07:49.658965723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:49.660135 containerd[1472]: time="2025-08-13T07:07:49.659001893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:49.660135 containerd[1472]: time="2025-08-13T07:07:49.659157437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.039 [INFO][4654] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.092 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0 coredns-668d6bf9bc- kube-system 1e92778b-e18e-498e-93e6-ad7ba8ca5d17 962 0 2025-08-13 07:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db coredns-668d6bf9bc-p7m85 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie8e100c4cb0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.092 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.256 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" HandleID="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.256 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" HandleID="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003326b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-4-06119f59db", "pod":"coredns-668d6bf9bc-p7m85", "timestamp":"2025-08-13 07:07:49.25660599 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.257 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.328 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.328 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.368 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.415 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.451 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.464 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.482 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.482 [INFO][4676] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.493 [INFO][4676] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670 Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.515 [INFO][4676] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.537 [INFO][4676] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.7/26] block=192.168.21.0/26 handle="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.538 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.7/26] handle="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.539 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
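
All of these assignments draw from a single affine block, 192.168.21.0/26: with 32 - 26 = 6 host bits it spans 2^6 = 64 addresses, 192.168.21.0 through 192.168.21.63, before this node would need a second block (the MaxBlocksPerHost:0 in the arguments above means no limit). A quick check with the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.21.0/26")
	hostBits := 32 - block.Bits()
	fmt.Printf("%s spans %d addresses\n", block, 1<<hostBits) // 64

	for _, s := range []string{"192.168.21.7", "192.168.21.63", "192.168.21.64"} {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a)) // true, true, false
	}
}
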
Aug 13 07:07:49.664137 containerd[1472]: 2025-08-13 07:07:49.539 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.7/26] IPv6=[] ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" HandleID="k8s-pod-network.560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.666481 containerd[1472]: 2025-08-13 07:07:49.571 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e92778b-e18e-498e-93e6-ad7ba8ca5d17", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"coredns-668d6bf9bc-p7m85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e100c4cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:49.666481 containerd[1472]: 2025-08-13 07:07:49.571 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.7/32] ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.666481 containerd[1472]: 2025-08-13 07:07:49.571 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8e100c4cb0 ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.666481 containerd[1472]: 2025-08-13 07:07:49.593 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" 
WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.666481 containerd[1472]: 2025-08-13 07:07:49.601 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e92778b-e18e-498e-93e6-ad7ba8ca5d17", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670", Pod:"coredns-668d6bf9bc-p7m85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e100c4cb0", MAC:"86:21:8f:08:5a:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:49.666481 containerd[1472]: 2025-08-13 07:07:49.647 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670" Namespace="kube-system" Pod="coredns-668d6bf9bc-p7m85" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:07:49.694403 systemd-networkd[1367]: calie892baa2b2e: Gained IPv6LL Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.359 [INFO][4684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.366 [INFO][4684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" iface="eth0" netns="/var/run/netns/cni-879c7bb9-30f2-a98d-2c84-8534771d4f68" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.371 [INFO][4684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" iface="eth0" netns="/var/run/netns/cni-879c7bb9-30f2-a98d-2c84-8534771d4f68" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.372 [INFO][4684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" iface="eth0" netns="/var/run/netns/cni-879c7bb9-30f2-a98d-2c84-8534771d4f68" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.372 [INFO][4684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.372 [INFO][4684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.656 [INFO][4705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.665 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.667 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.698 [WARNING][4705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.699 [INFO][4705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.705 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:49.737329 containerd[1472]: 2025-08-13 07:07:49.711 [INFO][4684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:07:49.742884 containerd[1472]: time="2025-08-13T07:07:49.741188946Z" level=info msg="TearDown network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" successfully" Aug 13 07:07:49.742884 containerd[1472]: time="2025-08-13T07:07:49.741245730Z" level=info msg="StopPodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" returns successfully" Aug 13 07:07:49.748075 systemd[1]: run-netns-cni\x2d879c7bb9\x2d30f2\x2da98d\x2d2c84\x2d8534771d4f68.mount: Deactivated successfully. Aug 13 07:07:49.777919 systemd[1]: Started cri-containerd-e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a.scope - libcontainer container e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a. 
Aug 13 07:07:49.806407 containerd[1472]: time="2025-08-13T07:07:49.806227055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mqjrr,Uid:b305560b-20ae-4df3-8a79-002d16b6c79f,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:07:49.851878 containerd[1472]: time="2025-08-13T07:07:49.842958916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:49.851878 containerd[1472]: time="2025-08-13T07:07:49.843626513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:49.851878 containerd[1472]: time="2025-08-13T07:07:49.843664968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:49.851878 containerd[1472]: time="2025-08-13T07:07:49.847029971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:49.855237 kubelet[2496]: E0813 07:07:49.855196 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:50.006107 systemd[1]: Started cri-containerd-560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670.scope - libcontainer container 560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670. Aug 13 07:07:50.274000 containerd[1472]: time="2025-08-13T07:07:50.273828422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p7m85,Uid:1e92778b-e18e-498e-93e6-ad7ba8ca5d17,Namespace:kube-system,Attempt:1,} returns sandbox id \"560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670\"" Aug 13 07:07:50.278566 kubelet[2496]: E0813 07:07:50.276240 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:50.287665 containerd[1472]: time="2025-08-13T07:07:50.287008914Z" level=info msg="CreateContainer within sandbox \"560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:07:50.341228 containerd[1472]: time="2025-08-13T07:07:50.341163476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb8c445f9-bjpjh,Uid:2150c293-bab3-40c4-95af-02ef0839fd92,Namespace:calico-system,Attempt:1,} returns sandbox id \"e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a\"" Aug 13 07:07:50.415365 containerd[1472]: time="2025-08-13T07:07:50.415308942Z" level=info msg="CreateContainer within sandbox \"560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"372345602d69e5948ab457c83097b92180e403b14348ed8ba765f3f8ef1f14d7\"" Aug 13 07:07:50.421558 containerd[1472]: time="2025-08-13T07:07:50.420250203Z" level=info msg="StartContainer for \"372345602d69e5948ab457c83097b92180e403b14348ed8ba765f3f8ef1f14d7\"" Aug 13 07:07:50.456279 systemd-networkd[1367]: caliba4d4bbd84a: Link UP Aug 13 07:07:50.458827 systemd-networkd[1367]: caliba4d4bbd84a: Gained carrier Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.107 [INFO][4785] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 
07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.168 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0 calico-apiserver-785ff45d87- calico-apiserver b305560b-20ae-4df3-8a79-002d16b6c79f 982 0 2025-08-13 07:07:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:785ff45d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-4-06119f59db calico-apiserver-785ff45d87-mqjrr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba4d4bbd84a [] [] }} ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.170 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.274 [INFO][4812] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" HandleID="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.277 [INFO][4812] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" HandleID="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-4-06119f59db", "pod":"calico-apiserver-785ff45d87-mqjrr", "timestamp":"2025-08-13 07:07:50.274460765 +0000 UTC"}, Hostname:"ci-4081.3.5-4-06119f59db", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.277 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.277 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.277 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-4-06119f59db' Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.299 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.309 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.323 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.329 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.350 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.0/26 host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.351 [INFO][4812] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.0/26 handle="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.363 [INFO][4812] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67 Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.385 [INFO][4812] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.0/26 handle="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.414 [INFO][4812] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.8/26] block=192.168.21.0/26 handle="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.419 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.8/26] handle="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" host="ci-4081.3.5-4-06119f59db" Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.419 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
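
Threaded through this journal is a recurring kubelet error from dns.go:153: the node feeds pods more nameserver entries than the resolver can use, so kubelet applies the first three and reports the rest as omitted. The applied line it prints, "67.207.67.3 67.207.67.2 67.207.67.3", shows duplicates counting against the cap. A sketch of that clamp; the limit of 3 mirrors glibc's MAXNS, the dropped fourth entry is invented for illustration, and the real logic lives in kubelet's dns.go rather than anything this simple.

package main

import "fmt"

const maxNameservers = 3 // glibc resolv.conf limit (MAXNS)

// clamp keeps the first maxNameservers entries verbatim, duplicates and
// all, and reports what fell off the end.
func clamp(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// First three copied from the kubelet error above; the fourth is a
	// hypothetical extra entry standing in for whatever was omitted.
	servers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "192.0.2.53"}
	applied, omitted := clamp(servers)
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}
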
Aug 13 07:07:50.525362 containerd[1472]: 2025-08-13 07:07:50.419 [INFO][4812] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.8/26] IPv6=[] ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" HandleID="k8s-pod-network.4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.526382 containerd[1472]: 2025-08-13 07:07:50.442 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"b305560b-20ae-4df3-8a79-002d16b6c79f", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"", Pod:"calico-apiserver-785ff45d87-mqjrr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba4d4bbd84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:50.526382 containerd[1472]: 2025-08-13 07:07:50.443 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.8/32] ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.526382 containerd[1472]: 2025-08-13 07:07:50.444 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba4d4bbd84a ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.526382 containerd[1472]: 2025-08-13 07:07:50.461 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.526382 containerd[1472]: 2025-08-13 07:07:50.465 [INFO][4785] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"b305560b-20ae-4df3-8a79-002d16b6c79f", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67", Pod:"calico-apiserver-785ff45d87-mqjrr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba4d4bbd84a", MAC:"52:91:a7:4f:70:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:50.526382 containerd[1472]: 2025-08-13 07:07:50.506 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67" Namespace="calico-apiserver" Pod="calico-apiserver-785ff45d87-mqjrr" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:07:50.568924 kubelet[2496]: I0813 07:07:50.568190 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:07:50.569346 kubelet[2496]: E0813 07:07:50.568892 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:50.666113 systemd[1]: Started cri-containerd-372345602d69e5948ab457c83097b92180e403b14348ed8ba765f3f8ef1f14d7.scope - libcontainer container 372345602d69e5948ab457c83097b92180e403b14348ed8ba765f3f8ef1f14d7. Aug 13 07:07:50.719020 containerd[1472]: time="2025-08-13T07:07:50.718573338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:50.719020 containerd[1472]: time="2025-08-13T07:07:50.718678488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:50.719020 containerd[1472]: time="2025-08-13T07:07:50.718699607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:50.719393 containerd[1472]: time="2025-08-13T07:07:50.718860073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:50.781493 systemd-networkd[1367]: cali25f8031ca06: Gained IPv6LL Aug 13 07:07:50.871062 kubelet[2496]: E0813 07:07:50.871021 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:50.872937 kubelet[2496]: E0813 07:07:50.872899 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:50.913309 containerd[1472]: time="2025-08-13T07:07:50.913155501Z" level=info msg="StartContainer for \"372345602d69e5948ab457c83097b92180e403b14348ed8ba765f3f8ef1f14d7\" returns successfully" Aug 13 07:07:50.923811 systemd[1]: Started cri-containerd-4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67.scope - libcontainer container 4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67. Aug 13 07:07:51.357684 systemd-networkd[1367]: calie8e100c4cb0: Gained IPv6LL Aug 13 07:07:51.499633 containerd[1472]: time="2025-08-13T07:07:51.499351568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785ff45d87-mqjrr,Uid:b305560b-20ae-4df3-8a79-002d16b6c79f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67\"" Aug 13 07:07:51.741288 systemd-networkd[1367]: caliba4d4bbd84a: Gained IPv6LL Aug 13 07:07:51.884792 kubelet[2496]: E0813 07:07:51.883542 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:52.056158 kubelet[2496]: I0813 07:07:52.056010 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p7m85" podStartSLOduration=48.055987911 podStartE2EDuration="48.055987911s" podCreationTimestamp="2025-08-13 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:51.950325068 +0000 UTC m=+51.059353982" watchObservedRunningTime="2025-08-13 07:07:52.055987911 +0000 UTC m=+51.165016801" Aug 13 07:07:52.452795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902746917.mount: Deactivated successfully. 
Aug 13 07:07:52.689235 kernel: bpftool[4985]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:07:52.897146 kubelet[2496]: E0813 07:07:52.897087 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:53.281283 systemd-networkd[1367]: vxlan.calico: Link UP Aug 13 07:07:53.281295 systemd-networkd[1367]: vxlan.calico: Gained carrier Aug 13 07:07:53.727360 containerd[1472]: time="2025-08-13T07:07:53.725848968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:53.731400 containerd[1472]: time="2025-08-13T07:07:53.730376147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:07:53.733848 containerd[1472]: time="2025-08-13T07:07:53.733232617Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:53.748534 containerd[1472]: time="2025-08-13T07:07:53.747601079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:53.750601 containerd[1472]: time="2025-08-13T07:07:53.750559518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.218872617s" Aug 13 07:07:53.752215 containerd[1472]: time="2025-08-13T07:07:53.750760220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:07:53.755513 containerd[1472]: time="2025-08-13T07:07:53.754898780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:07:53.760127 containerd[1472]: time="2025-08-13T07:07:53.760083720Z" level=info msg="CreateContainer within sandbox \"6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:07:53.820475 containerd[1472]: time="2025-08-13T07:07:53.820060242Z" level=info msg="CreateContainer within sandbox \"6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1f0cc89dbb58ee0dec2a6b3154e8367ec2997c5cb7d7c85a4bc5d98eff460e9d\"" Aug 13 07:07:53.823846 containerd[1472]: time="2025-08-13T07:07:53.823749769Z" level=info msg="StartContainer for \"1f0cc89dbb58ee0dec2a6b3154e8367ec2997c5cb7d7c85a4bc5d98eff460e9d\"" Aug 13 07:07:53.881680 systemd[1]: Started cri-containerd-1f0cc89dbb58ee0dec2a6b3154e8367ec2997c5cb7d7c85a4bc5d98eff460e9d.scope - libcontainer container 1f0cc89dbb58ee0dec2a6b3154e8367ec2997c5cb7d7c85a4bc5d98eff460e9d. 
Aug 13 07:07:53.910643 kubelet[2496]: E0813 07:07:53.910197 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:07:54.002415 containerd[1472]: time="2025-08-13T07:07:54.001044415Z" level=info msg="StartContainer for \"1f0cc89dbb58ee0dec2a6b3154e8367ec2997c5cb7d7c85a4bc5d98eff460e9d\" returns successfully" Aug 13 07:07:54.952170 systemd[1]: Started sshd@7-64.23.220.168:22-139.178.89.65:41448.service - OpenSSH per-connection server daemon (139.178.89.65:41448). Aug 13 07:07:55.196632 sshd[5111]: Accepted publickey for core from 139.178.89.65 port 41448 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:55.200013 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Aug 13 07:07:55.211746 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:55.226802 systemd-logind[1447]: New session 8 of user core. Aug 13 07:07:55.232794 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:07:55.441275 containerd[1472]: time="2025-08-13T07:07:55.441206958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:55.444541 containerd[1472]: time="2025-08-13T07:07:55.444448645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:07:55.457056 containerd[1472]: time="2025-08-13T07:07:55.456974494Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:55.461048 containerd[1472]: time="2025-08-13T07:07:55.460969950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:55.462195 containerd[1472]: time="2025-08-13T07:07:55.462096756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.707152177s" Aug 13 07:07:55.462195 containerd[1472]: time="2025-08-13T07:07:55.462154135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:07:55.464891 containerd[1472]: time="2025-08-13T07:07:55.464191703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:07:55.472240 containerd[1472]: time="2025-08-13T07:07:55.471535702Z" level=info msg="CreateContainer within sandbox \"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:07:55.531587 containerd[1472]: time="2025-08-13T07:07:55.527742753Z" level=info msg="CreateContainer within sandbox \"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"af0cc93f731a80044bd794b946da29dcfb8ad03ae4ae935a119ce6dbc07523d1\"" Aug 13 07:07:55.533581 containerd[1472]: 
time="2025-08-13T07:07:55.532730225Z" level=info msg="StartContainer for \"af0cc93f731a80044bd794b946da29dcfb8ad03ae4ae935a119ce6dbc07523d1\"" Aug 13 07:07:55.616283 systemd[1]: Started cri-containerd-af0cc93f731a80044bd794b946da29dcfb8ad03ae4ae935a119ce6dbc07523d1.scope - libcontainer container af0cc93f731a80044bd794b946da29dcfb8ad03ae4ae935a119ce6dbc07523d1. Aug 13 07:07:55.689229 containerd[1472]: time="2025-08-13T07:07:55.688311310Z" level=info msg="StartContainer for \"af0cc93f731a80044bd794b946da29dcfb8ad03ae4ae935a119ce6dbc07523d1\" returns successfully" Aug 13 07:07:56.072197 sshd[5111]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:56.086114 systemd[1]: sshd@7-64.23.220.168:22-139.178.89.65:41448.service: Deactivated successfully. Aug 13 07:07:56.092289 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:07:56.096365 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:07:56.100055 systemd-logind[1447]: Removed session 8. Aug 13 07:07:56.109462 systemd[1]: run-containerd-runc-k8s.io-af0cc93f731a80044bd794b946da29dcfb8ad03ae4ae935a119ce6dbc07523d1-runc.ZfwZN9.mount: Deactivated successfully. Aug 13 07:07:58.363487 containerd[1472]: time="2025-08-13T07:07:58.363363077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.365192 containerd[1472]: time="2025-08-13T07:07:58.365126390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:07:58.366549 containerd[1472]: time="2025-08-13T07:07:58.365910225Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.368638 containerd[1472]: time="2025-08-13T07:07:58.368593522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.369500 containerd[1472]: time="2025-08-13T07:07:58.369460363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.905219191s" Aug 13 07:07:58.369712 containerd[1472]: time="2025-08-13T07:07:58.369685989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:07:58.371547 containerd[1472]: time="2025-08-13T07:07:58.371482450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:07:58.375310 containerd[1472]: time="2025-08-13T07:07:58.375234476Z" level=info msg="CreateContainer within sandbox \"d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:07:58.406578 containerd[1472]: time="2025-08-13T07:07:58.406427926Z" level=info msg="CreateContainer within sandbox \"d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"bb2e3065c83f1b4cf3db229e49cc1765adcbd91b68ec5f3bfe1ee18b4dd73439\"" Aug 13 07:07:58.408669 containerd[1472]: time="2025-08-13T07:07:58.407811727Z" level=info msg="StartContainer for \"bb2e3065c83f1b4cf3db229e49cc1765adcbd91b68ec5f3bfe1ee18b4dd73439\"" Aug 13 07:07:58.462669 systemd[1]: run-containerd-runc-k8s.io-bb2e3065c83f1b4cf3db229e49cc1765adcbd91b68ec5f3bfe1ee18b4dd73439-runc.3UguFF.mount: Deactivated successfully. Aug 13 07:07:58.475847 systemd[1]: Started cri-containerd-bb2e3065c83f1b4cf3db229e49cc1765adcbd91b68ec5f3bfe1ee18b4dd73439.scope - libcontainer container bb2e3065c83f1b4cf3db229e49cc1765adcbd91b68ec5f3bfe1ee18b4dd73439. Aug 13 07:07:58.546022 containerd[1472]: time="2025-08-13T07:07:58.545964332Z" level=info msg="StartContainer for \"bb2e3065c83f1b4cf3db229e49cc1765adcbd91b68ec5f3bfe1ee18b4dd73439\" returns successfully" Aug 13 07:07:59.001439 kubelet[2496]: I0813 07:07:59.001347 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-785ff45d87-mccbf" podStartSLOduration=31.434143866 podStartE2EDuration="41.001323309s" podCreationTimestamp="2025-08-13 07:07:18 +0000 UTC" firstStartedPulling="2025-08-13 07:07:48.803659796 +0000 UTC m=+47.912688673" lastFinishedPulling="2025-08-13 07:07:58.370839225 +0000 UTC m=+57.479868116" observedRunningTime="2025-08-13 07:07:59.00096505 +0000 UTC m=+58.109993982" watchObservedRunningTime="2025-08-13 07:07:59.001323309 +0000 UTC m=+58.110352210" Aug 13 07:07:59.002196 kubelet[2496]: I0813 07:07:59.001765 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-j2p8w" podStartSLOduration=31.780028504 podStartE2EDuration="38.001753739s" podCreationTimestamp="2025-08-13 07:07:21 +0000 UTC" firstStartedPulling="2025-08-13 07:07:47.531016436 +0000 UTC m=+46.640045323" lastFinishedPulling="2025-08-13 07:07:53.752741663 +0000 UTC m=+52.861770558" observedRunningTime="2025-08-13 07:07:54.981959804 +0000 UTC m=+54.090988697" watchObservedRunningTime="2025-08-13 07:07:59.001753739 +0000 UTC m=+58.110782637" Aug 13 07:08:00.004254 kubelet[2496]: I0813 07:07:59.994638 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:08:01.111469 systemd[1]: Started sshd@8-64.23.220.168:22-139.178.89.65:39202.service - OpenSSH per-connection server daemon (139.178.89.65:39202). Aug 13 07:08:01.310622 sshd[5271]: Accepted publickey for core from 139.178.89.65 port 39202 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:01.315178 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:01.328805 systemd-logind[1447]: New session 9 of user core. Aug 13 07:08:01.335532 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:08:01.579821 containerd[1472]: time="2025-08-13T07:08:01.579750916Z" level=info msg="StopPodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\"" Aug 13 07:08:02.371615 sshd[5271]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:02.384970 systemd[1]: sshd@8-64.23.220.168:22-139.178.89.65:39202.service: Deactivated successfully. Aug 13 07:08:02.390379 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:08:02.395871 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:08:02.403113 systemd-logind[1447]: Removed session 9. 
Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.415 [WARNING][5291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0227ea35-1919-478f-af60-278b21232cf4", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218", Pod:"calico-apiserver-785ff45d87-mccbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie892baa2b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.420 [INFO][5291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.420 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" iface="eth0" netns="" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.421 [INFO][5291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.421 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.842 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.856 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.860 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.886 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.886 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.894 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:02.913356 containerd[1472]: 2025-08-13 07:08:02.904 [INFO][5291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:02.917267 containerd[1472]: time="2025-08-13T07:08:02.915992556Z" level=info msg="TearDown network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" successfully" Aug 13 07:08:02.917267 containerd[1472]: time="2025-08-13T07:08:02.916039093Z" level=info msg="StopPodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" returns successfully" Aug 13 07:08:03.250409 containerd[1472]: time="2025-08-13T07:08:03.250269033Z" level=info msg="RemovePodSandbox for \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\"" Aug 13 07:08:03.270011 containerd[1472]: time="2025-08-13T07:08:03.269953158Z" level=info msg="Forcibly stopping sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\"" Aug 13 07:08:03.441527 containerd[1472]: time="2025-08-13T07:08:03.441404956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:03.503777 containerd[1472]: time="2025-08-13T07:08:03.445393675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:08:03.503777 containerd[1472]: time="2025-08-13T07:08:03.475659808Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:03.503777 containerd[1472]: time="2025-08-13T07:08:03.499226074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.127542485s" Aug 13 07:08:03.503777 containerd[1472]: time="2025-08-13T07:08:03.502795233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:08:03.504001 containerd[1472]: time="2025-08-13T07:08:03.503878830Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.423 [WARNING][5317] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0227ea35-1919-478f-af60-278b21232cf4", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"d8b78acbf4165c56e4551b56d22770d5b9d4ff7451847093e3c8763a1ad7b218", Pod:"calico-apiserver-785ff45d87-mccbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie892baa2b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.423 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.423 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" iface="eth0" netns="" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.423 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.423 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.536 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.540 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.541 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.553 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.553 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" HandleID="k8s-pod-network.45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mccbf-eth0" Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.559 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:03.573458 containerd[1472]: 2025-08-13 07:08:03.565 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591" Aug 13 07:08:03.573458 containerd[1472]: time="2025-08-13T07:08:03.573032428Z" level=info msg="TearDown network for sandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" successfully" Aug 13 07:08:03.586441 containerd[1472]: time="2025-08-13T07:08:03.586302127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:03.589839 containerd[1472]: time="2025-08-13T07:08:03.589457405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:08:03.615474 containerd[1472]: time="2025-08-13T07:08:03.614273446Z" level=info msg="RemovePodSandbox \"45964586509f77c34a3ec316a04c0e18b4a4d428286b676a463bae6550096591\" returns successfully" Aug 13 07:08:03.632950 containerd[1472]: time="2025-08-13T07:08:03.632883284Z" level=info msg="StopPodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\"" Aug 13 07:08:03.832394 containerd[1472]: time="2025-08-13T07:08:03.832333951Z" level=info msg="CreateContainer within sandbox \"e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:08:03.898285 containerd[1472]: time="2025-08-13T07:08:03.898220245Z" level=info msg="CreateContainer within sandbox \"e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a818f81f5cd11b23bec6c92c577e85bc47f63e723192f119263b03014f0e1301\"" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.826 [WARNING][5338] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8831262f-9738-4f38-9a9d-147ae2cd3257", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5", Pod:"goldmane-768f4c5c69-j2p8w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61de62262c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.827 [INFO][5338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.827 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" iface="eth0" netns="" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.827 [INFO][5338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.827 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.896 [INFO][5349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.897 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.897 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.909 [WARNING][5349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.909 [INFO][5349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.913 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:03.927590 containerd[1472]: 2025-08-13 07:08:03.921 [INFO][5338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:03.927590 containerd[1472]: time="2025-08-13T07:08:03.926614469Z" level=info msg="TearDown network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" successfully" Aug 13 07:08:03.927590 containerd[1472]: time="2025-08-13T07:08:03.926653136Z" level=info msg="StopPodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" returns successfully" Aug 13 07:08:03.940385 containerd[1472]: time="2025-08-13T07:08:03.940346928Z" level=info msg="StartContainer for \"a818f81f5cd11b23bec6c92c577e85bc47f63e723192f119263b03014f0e1301\"" Aug 13 07:08:03.941272 containerd[1472]: time="2025-08-13T07:08:03.940449365Z" level=info msg="RemovePodSandbox for \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\"" Aug 13 07:08:03.941272 containerd[1472]: time="2025-08-13T07:08:03.940865976Z" level=info msg="Forcibly stopping sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\"" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.029 [WARNING][5365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8831262f-9738-4f38-9a9d-147ae2cd3257", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"6b28a06ae27dfc476725ce7c4efddfa54e5192c32944c4f72a5892a7e54331d5", Pod:"goldmane-768f4c5c69-j2p8w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61de62262c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.031 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.032 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" iface="eth0" netns="" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.032 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.032 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.069 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.070 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.070 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.079 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.079 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" HandleID="k8s-pod-network.3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Workload="ci--4081.3.5--4--06119f59db-k8s-goldmane--768f4c5c69--j2p8w-eth0" Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.082 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.092366 containerd[1472]: 2025-08-13 07:08:04.089 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f" Aug 13 07:08:04.092366 containerd[1472]: time="2025-08-13T07:08:04.092034872Z" level=info msg="TearDown network for sandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" successfully" Aug 13 07:08:04.129773 containerd[1472]: time="2025-08-13T07:08:04.127959481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:04.129773 containerd[1472]: time="2025-08-13T07:08:04.128070237Z" level=info msg="RemovePodSandbox \"3b305243dc4eb1c1ef1b5526ed4aadf66741985414f145bfc6b3e10bc56cf25f\" returns successfully" Aug 13 07:08:04.130476 containerd[1472]: time="2025-08-13T07:08:04.130443442Z" level=info msg="StopPodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\"" Aug 13 07:08:04.137343 containerd[1472]: time="2025-08-13T07:08:04.137012163Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:04.138317 containerd[1472]: time="2025-08-13T07:08:04.138257223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:08:04.142163 containerd[1472]: time="2025-08-13T07:08:04.142120343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 552.588971ms" Aug 13 07:08:04.142645 containerd[1472]: time="2025-08-13T07:08:04.142315959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:08:04.145126 containerd[1472]: time="2025-08-13T07:08:04.144883281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:08:04.148248 containerd[1472]: time="2025-08-13T07:08:04.148200358Z" level=info msg="CreateContainer within sandbox \"4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:08:04.175544 containerd[1472]: time="2025-08-13T07:08:04.175482795Z" level=info 
msg="CreateContainer within sandbox \"4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"82f47bfda9a7d96a5e20dbecdfc007573b992e865e7b3f9c3e78302c2794a9fb\"" Aug 13 07:08:04.181054 containerd[1472]: time="2025-08-13T07:08:04.180355653Z" level=info msg="StartContainer for \"82f47bfda9a7d96a5e20dbecdfc007573b992e865e7b3f9c3e78302c2794a9fb\"" Aug 13 07:08:04.319100 systemd[1]: Started cri-containerd-a818f81f5cd11b23bec6c92c577e85bc47f63e723192f119263b03014f0e1301.scope - libcontainer container a818f81f5cd11b23bec6c92c577e85bc47f63e723192f119263b03014f0e1301. Aug 13 07:08:04.332813 systemd[1]: Started cri-containerd-82f47bfda9a7d96a5e20dbecdfc007573b992e865e7b3f9c3e78302c2794a9fb.scope - libcontainer container 82f47bfda9a7d96a5e20dbecdfc007573b992e865e7b3f9c3e78302c2794a9fb. Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.246 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"b305560b-20ae-4df3-8a79-002d16b6c79f", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67", Pod:"calico-apiserver-785ff45d87-mqjrr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba4d4bbd84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.249 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.249 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" iface="eth0" netns="" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.249 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.249 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.339 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.339 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.340 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.362 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.363 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.375 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.394224 containerd[1472]: 2025-08-13 07:08:04.383 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.394224 containerd[1472]: time="2025-08-13T07:08:04.394013067Z" level=info msg="TearDown network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" successfully" Aug 13 07:08:04.394224 containerd[1472]: time="2025-08-13T07:08:04.394038754Z" level=info msg="StopPodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" returns successfully" Aug 13 07:08:04.399713 containerd[1472]: time="2025-08-13T07:08:04.399652518Z" level=info msg="RemovePodSandbox for \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\"" Aug 13 07:08:04.399713 containerd[1472]: time="2025-08-13T07:08:04.399712678Z" level=info msg="Forcibly stopping sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\"" Aug 13 07:08:04.563919 containerd[1472]: time="2025-08-13T07:08:04.563785019Z" level=info msg="StartContainer for \"a818f81f5cd11b23bec6c92c577e85bc47f63e723192f119263b03014f0e1301\" returns successfully" Aug 13 07:08:04.593048 containerd[1472]: time="2025-08-13T07:08:04.592826887Z" level=info msg="StartContainer for \"82f47bfda9a7d96a5e20dbecdfc007573b992e865e7b3f9c3e78302c2794a9fb\" returns successfully" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.490 [WARNING][5458] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0", GenerateName:"calico-apiserver-785ff45d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"b305560b-20ae-4df3-8a79-002d16b6c79f", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785ff45d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"4fc3fb744a88ef09b3cf8d6cf8b9625ae7a9783f2ce4db488a5b832f5835bf67", Pod:"calico-apiserver-785ff45d87-mqjrr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba4d4bbd84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.491 [INFO][5458] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.491 [INFO][5458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" iface="eth0" netns="" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.491 [INFO][5458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.491 [INFO][5458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.608 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.608 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.608 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.623 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.624 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" HandleID="k8s-pod-network.94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--apiserver--785ff45d87--mqjrr-eth0" Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.627 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.632478 containerd[1472]: 2025-08-13 07:08:04.629 [INFO][5458] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91" Aug 13 07:08:04.635664 containerd[1472]: time="2025-08-13T07:08:04.633454600Z" level=info msg="TearDown network for sandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" successfully" Aug 13 07:08:04.639326 containerd[1472]: time="2025-08-13T07:08:04.639022969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:04.639326 containerd[1472]: time="2025-08-13T07:08:04.639279087Z" level=info msg="RemovePodSandbox \"94df12da0d5756bddb7ae0382bece132fb5674309cac614d3583f4e0569c6b91\" returns successfully" Aug 13 07:08:04.640014 containerd[1472]: time="2025-08-13T07:08:04.639852756Z" level=info msg="StopPodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\"" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.737 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39ea9a35-bd67-4511-9bad-6e60fa944270", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98", Pod:"csi-node-driver-k4bpr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2196893a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.739 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.740 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" iface="eth0" netns="" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.740 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.740 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.821 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.821 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.821 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.856 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.856 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.867 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.881792 containerd[1472]: 2025-08-13 07:08:04.871 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:04.881792 containerd[1472]: time="2025-08-13T07:08:04.881728880Z" level=info msg="TearDown network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" successfully" Aug 13 07:08:04.881792 containerd[1472]: time="2025-08-13T07:08:04.881765306Z" level=info msg="StopPodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" returns successfully" Aug 13 07:08:04.885865 containerd[1472]: time="2025-08-13T07:08:04.885075988Z" level=info msg="RemovePodSandbox for \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\"" Aug 13 07:08:04.885865 containerd[1472]: time="2025-08-13T07:08:04.885115865Z" level=info msg="Forcibly stopping sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\"" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:04.960 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39ea9a35-bd67-4511-9bad-6e60fa944270", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98", Pod:"csi-node-driver-k4bpr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2196893a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:04.960 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:04.960 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" iface="eth0" netns="" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:04.960 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:04.960 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.015 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.015 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.015 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.025 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.026 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" HandleID="k8s-pod-network.4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Workload="ci--4081.3.5--4--06119f59db-k8s-csi--node--driver--k4bpr-eth0" Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.029 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.038051 containerd[1472]: 2025-08-13 07:08:05.034 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a" Aug 13 07:08:05.040807 containerd[1472]: time="2025-08-13T07:08:05.038179835Z" level=info msg="TearDown network for sandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" successfully" Aug 13 07:08:05.041781 containerd[1472]: time="2025-08-13T07:08:05.041733577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:05.041919 containerd[1472]: time="2025-08-13T07:08:05.041813674Z" level=info msg="RemovePodSandbox \"4f522e9d4407f3fcf8c260de25875c62dac34376e5872051575150e12524519a\" returns successfully" Aug 13 07:08:05.042439 containerd[1472]: time="2025-08-13T07:08:05.042402342Z" level=info msg="StopPodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\"" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.194 [WARNING][5559] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.194 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.194 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" iface="eth0" netns="" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.194 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.194 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.321 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.322 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.322 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.342 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.342 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.349 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.361852 containerd[1472]: 2025-08-13 07:08:05.357 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.364822 containerd[1472]: time="2025-08-13T07:08:05.361902627Z" level=info msg="TearDown network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" successfully" Aug 13 07:08:05.364822 containerd[1472]: time="2025-08-13T07:08:05.361946496Z" level=info msg="StopPodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" returns successfully" Aug 13 07:08:05.366649 containerd[1472]: time="2025-08-13T07:08:05.366494961Z" level=info msg="RemovePodSandbox for \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\"" Aug 13 07:08:05.366649 containerd[1472]: time="2025-08-13T07:08:05.366573458Z" level=info msg="Forcibly stopping sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\"" Aug 13 07:08:05.507540 kubelet[2496]: I0813 07:08:05.402843 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-785ff45d87-mqjrr" podStartSLOduration=34.765074181 podStartE2EDuration="47.38924026s" podCreationTimestamp="2025-08-13 07:07:18 +0000 UTC" firstStartedPulling="2025-08-13 07:07:51.520089006 +0000 UTC m=+50.629117897" lastFinishedPulling="2025-08-13 07:08:04.144255084 +0000 UTC m=+63.253283976" observedRunningTime="2025-08-13 07:08:05.384808566 +0000 UTC m=+64.493837466" watchObservedRunningTime="2025-08-13 07:08:05.38924026 +0000 UTC m=+64.498269151" Aug 13 07:08:05.533555 kubelet[2496]: I0813 07:08:05.531570 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fb8c445f9-bjpjh" podStartSLOduration=30.289544522 podStartE2EDuration="43.531539662s" podCreationTimestamp="2025-08-13 07:07:22 +0000 UTC" firstStartedPulling="2025-08-13 07:07:50.353540506 +0000 UTC m=+49.462569384" lastFinishedPulling="2025-08-13 07:08:03.595535609 +0000 UTC m=+62.704564524" observedRunningTime="2025-08-13 07:08:05.507498689 +0000 UTC m=+64.616527588" watchObservedRunningTime="2025-08-13 07:08:05.531539662 +0000 UTC m=+64.640568557" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.490 [WARNING][5582] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" WorkloadEndpoint="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.490 [INFO][5582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.490 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" iface="eth0" netns="" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.490 [INFO][5582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.490 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.574 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.578 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.579 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.609 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.609 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" HandleID="k8s-pod-network.3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Workload="ci--4081.3.5--4--06119f59db-k8s-whisker--75cb67d974--m5zrf-eth0" Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.614 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.628235 containerd[1472]: 2025-08-13 07:08:05.621 [INFO][5582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac" Aug 13 07:08:05.628235 containerd[1472]: time="2025-08-13T07:08:05.627035958Z" level=info msg="TearDown network for sandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" successfully" Aug 13 07:08:05.653135 containerd[1472]: time="2025-08-13T07:08:05.653075512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:05.654644 containerd[1472]: time="2025-08-13T07:08:05.653291645Z" level=info msg="RemovePodSandbox \"3857c08b259d88007697555e2d64dd576e5e16cbe1bfeda227b9f87284c806ac\" returns successfully" Aug 13 07:08:05.657367 containerd[1472]: time="2025-08-13T07:08:05.657328186Z" level=info msg="StopPodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\"" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.756 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0", GenerateName:"calico-kube-controllers-6fb8c445f9-", Namespace:"calico-system", SelfLink:"", UID:"2150c293-bab3-40c4-95af-02ef0839fd92", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb8c445f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a", Pod:"calico-kube-controllers-6fb8c445f9-bjpjh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25f8031ca06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.757 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.757 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" iface="eth0" netns="" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.757 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.757 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.797 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.797 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.797 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.808 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.808 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.811 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.818918 containerd[1472]: 2025-08-13 07:08:05.813 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.823990 containerd[1472]: time="2025-08-13T07:08:05.819033951Z" level=info msg="TearDown network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" successfully" Aug 13 07:08:05.823990 containerd[1472]: time="2025-08-13T07:08:05.819072800Z" level=info msg="StopPodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" returns successfully" Aug 13 07:08:05.823990 containerd[1472]: time="2025-08-13T07:08:05.820366345Z" level=info msg="RemovePodSandbox for \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\"" Aug 13 07:08:05.823990 containerd[1472]: time="2025-08-13T07:08:05.820414178Z" level=info msg="Forcibly stopping sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\"" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.901 [WARNING][5626] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0", GenerateName:"calico-kube-controllers-6fb8c445f9-", Namespace:"calico-system", SelfLink:"", UID:"2150c293-bab3-40c4-95af-02ef0839fd92", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb8c445f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"e35e04a4d4be2924af1aee515ce4b49118a72e84b4bbe278f3fc20b03218435a", Pod:"calico-kube-controllers-6fb8c445f9-bjpjh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25f8031ca06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.901 [INFO][5626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.901 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" iface="eth0" netns="" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.902 [INFO][5626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.902 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.937 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.938 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.938 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.948 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.948 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" HandleID="k8s-pod-network.584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Workload="ci--4081.3.5--4--06119f59db-k8s-calico--kube--controllers--6fb8c445f9--bjpjh-eth0" Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.953 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.962500 containerd[1472]: 2025-08-13 07:08:05.956 [INFO][5626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32" Aug 13 07:08:05.962500 containerd[1472]: time="2025-08-13T07:08:05.962444595Z" level=info msg="TearDown network for sandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" successfully" Aug 13 07:08:05.969286 containerd[1472]: time="2025-08-13T07:08:05.969226090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:05.969680 containerd[1472]: time="2025-08-13T07:08:05.969642968Z" level=info msg="RemovePodSandbox \"584d5690b23cb098950b4d58e934c6137fdb5d31634358f23ff195e537839a32\" returns successfully" Aug 13 07:08:05.970306 containerd[1472]: time="2025-08-13T07:08:05.970270781Z" level=info msg="StopPodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\"" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.040 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"553898de-3f11-4cc6-b330-03fffa4336bb", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0", Pod:"coredns-668d6bf9bc-tf4rp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc9c347ccc5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.041 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.041 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" iface="eth0" netns="" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.041 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.041 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.085 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.086 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.087 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.100 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.100 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.102 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:06.110065 containerd[1472]: 2025-08-13 07:08:06.107 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.111622 containerd[1472]: time="2025-08-13T07:08:06.110113343Z" level=info msg="TearDown network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" successfully" Aug 13 07:08:06.111622 containerd[1472]: time="2025-08-13T07:08:06.110138712Z" level=info msg="StopPodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" returns successfully" Aug 13 07:08:06.112222 containerd[1472]: time="2025-08-13T07:08:06.112177935Z" level=info msg="RemovePodSandbox for \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\"" Aug 13 07:08:06.112307 containerd[1472]: time="2025-08-13T07:08:06.112227207Z" level=info msg="Forcibly stopping sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\"" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.173 [WARNING][5669] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"553898de-3f11-4cc6-b330-03fffa4336bb", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"79208a2e0af3733b1f538635ef5701bf101aaa8c48ae90e2a0c25f224a116de0", Pod:"coredns-668d6bf9bc-tf4rp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc9c347ccc5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.174 [INFO][5669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.174 [INFO][5669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" iface="eth0" netns="" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.174 [INFO][5669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.174 [INFO][5669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.279 [INFO][5676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.279 [INFO][5676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.279 [INFO][5676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.295 [WARNING][5676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.295 [INFO][5676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" HandleID="k8s-pod-network.7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--tf4rp-eth0" Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.307 [INFO][5676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:06.323554 containerd[1472]: 2025-08-13 07:08:06.316 [INFO][5669] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af" Aug 13 07:08:06.323554 containerd[1472]: time="2025-08-13T07:08:06.323460525Z" level=info msg="TearDown network for sandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" successfully" Aug 13 07:08:06.332342 containerd[1472]: time="2025-08-13T07:08:06.332261700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:06.333130 containerd[1472]: time="2025-08-13T07:08:06.332988320Z" level=info msg="RemovePodSandbox \"7421e6d3a82659fe59563ad9aa7ae97aefacc671c5f65cf698bcfd5c9084d1af\" returns successfully" Aug 13 07:08:06.335806 containerd[1472]: time="2025-08-13T07:08:06.335758520Z" level=info msg="StopPodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\"" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.474 [WARNING][5691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e92778b-e18e-498e-93e6-ad7ba8ca5d17", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670", Pod:"coredns-668d6bf9bc-p7m85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e100c4cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.475 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.475 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" iface="eth0" netns="" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.475 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.475 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.542 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.542 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.543 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.557 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.558 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.560 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:06.568348 containerd[1472]: 2025-08-13 07:08:06.565 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.570150 containerd[1472]: time="2025-08-13T07:08:06.568375262Z" level=info msg="TearDown network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" successfully" Aug 13 07:08:06.570150 containerd[1472]: time="2025-08-13T07:08:06.568401880Z" level=info msg="StopPodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" returns successfully" Aug 13 07:08:06.570150 containerd[1472]: time="2025-08-13T07:08:06.569042975Z" level=info msg="RemovePodSandbox for \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\"" Aug 13 07:08:06.570150 containerd[1472]: time="2025-08-13T07:08:06.569077937Z" level=info msg="Forcibly stopping sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\"" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.653 [WARNING][5728] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e92778b-e18e-498e-93e6-ad7ba8ca5d17", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-4-06119f59db", ContainerID:"560e5b21457168b783c05af4f0a2987576c0e73044f91cd20ece8a0eae55d670", Pod:"coredns-668d6bf9bc-p7m85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8e100c4cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.656 [INFO][5728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.658 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" iface="eth0" netns="" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.658 [INFO][5728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.658 [INFO][5728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.729 [INFO][5735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.730 [INFO][5735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.730 [INFO][5735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.746 [WARNING][5735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.747 [INFO][5735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" HandleID="k8s-pod-network.6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Workload="ci--4081.3.5--4--06119f59db-k8s-coredns--668d6bf9bc--p7m85-eth0" Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.753 [INFO][5735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:06.763952 containerd[1472]: 2025-08-13 07:08:06.758 [INFO][5728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb" Aug 13 07:08:06.766424 containerd[1472]: time="2025-08-13T07:08:06.765299632Z" level=info msg="TearDown network for sandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" successfully" Aug 13 07:08:06.776655 containerd[1472]: time="2025-08-13T07:08:06.776384248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:06.776655 containerd[1472]: time="2025-08-13T07:08:06.776488527Z" level=info msg="RemovePodSandbox \"6a0ca0cbb30f930688ff20c43b28440afffd81bad67ad32ca8ce5d76fc8dd0fb\" returns successfully" Aug 13 07:08:07.286015 containerd[1472]: time="2025-08-13T07:08:07.285949882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:07.288360 containerd[1472]: time="2025-08-13T07:08:07.287995174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:08:07.289018 containerd[1472]: time="2025-08-13T07:08:07.288954618Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:07.294621 containerd[1472]: time="2025-08-13T07:08:07.293952731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:07.296812 containerd[1472]: time="2025-08-13T07:08:07.296671985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.151743926s" Aug 13 07:08:07.297165 containerd[1472]: time="2025-08-13T07:08:07.297086431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference 
\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:08:07.317550 containerd[1472]: time="2025-08-13T07:08:07.317471095Z" level=info msg="CreateContainer within sandbox \"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:08:07.351103 containerd[1472]: time="2025-08-13T07:08:07.351046750Z" level=info msg="CreateContainer within sandbox \"e4fcfb1b19aec2dae6e88782ce64f28e5d70d0012073c7297ebf2e660dee6b98\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"745468162ce5a7bb99238f14a39b6148e40e16e1928944c5bc3319a5bb7ea8f4\"" Aug 13 07:08:07.354570 containerd[1472]: time="2025-08-13T07:08:07.354447419Z" level=info msg="StartContainer for \"745468162ce5a7bb99238f14a39b6148e40e16e1928944c5bc3319a5bb7ea8f4\"" Aug 13 07:08:07.409839 systemd[1]: Started sshd@9-64.23.220.168:22-139.178.89.65:39210.service - OpenSSH per-connection server daemon (139.178.89.65:39210). Aug 13 07:08:07.484216 systemd[1]: Started cri-containerd-745468162ce5a7bb99238f14a39b6148e40e16e1928944c5bc3319a5bb7ea8f4.scope - libcontainer container 745468162ce5a7bb99238f14a39b6148e40e16e1928944c5bc3319a5bb7ea8f4. Aug 13 07:08:07.544151 kubelet[2496]: I0813 07:08:07.510517 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:08:07.637162 containerd[1472]: time="2025-08-13T07:08:07.637096774Z" level=info msg="StartContainer for \"745468162ce5a7bb99238f14a39b6148e40e16e1928944c5bc3319a5bb7ea8f4\" returns successfully" Aug 13 07:08:07.667551 sshd[5758]: Accepted publickey for core from 139.178.89.65 port 39210 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:07.671576 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:07.679461 systemd-logind[1447]: New session 10 of user core. Aug 13 07:08:07.687039 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:08:08.300278 kubelet[2496]: I0813 07:08:08.298675 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k4bpr" podStartSLOduration=27.407096692 podStartE2EDuration="46.298647557s" podCreationTimestamp="2025-08-13 07:07:22 +0000 UTC" firstStartedPulling="2025-08-13 07:07:48.407103665 +0000 UTC m=+47.516132560" lastFinishedPulling="2025-08-13 07:08:07.298654535 +0000 UTC m=+66.407683425" observedRunningTime="2025-08-13 07:08:08.291847881 +0000 UTC m=+67.400876779" watchObservedRunningTime="2025-08-13 07:08:08.298647557 +0000 UTC m=+67.407676455" Aug 13 07:08:08.515332 sshd[5758]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:08.534336 systemd[1]: sshd@9-64.23.220.168:22-139.178.89.65:39210.service: Deactivated successfully. Aug 13 07:08:08.539173 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:08:08.541042 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:08:08.550487 systemd[1]: Started sshd@10-64.23.220.168:22-139.178.89.65:39222.service - OpenSSH per-connection server daemon (139.178.89.65:39222). Aug 13 07:08:08.553284 systemd-logind[1447]: Removed session 10. 
Aug 13 07:08:08.607042 kubelet[2496]: I0813 07:08:08.600805 2496 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 07:08:08.608706 kubelet[2496]: I0813 07:08:08.608673 2496 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 07:08:08.673673 sshd[5807]: Accepted publickey for core from 139.178.89.65 port 39222 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:08.676688 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:08.686540 systemd-logind[1447]: New session 11 of user core.
Aug 13 07:08:08.693765 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:08:08.978924 sshd[5807]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:08.994244 systemd[1]: sshd@10-64.23.220.168:22-139.178.89.65:39222.service: Deactivated successfully.
Aug 13 07:08:09.001012 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:08:09.003280 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:08:09.016053 systemd[1]: Started sshd@11-64.23.220.168:22-139.178.89.65:57904.service - OpenSSH per-connection server daemon (139.178.89.65:57904).
Aug 13 07:08:09.019595 systemd-logind[1447]: Removed session 11.
Aug 13 07:08:09.114602 sshd[5818]: Accepted publickey for core from 139.178.89.65 port 57904 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:09.116711 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:09.122912 systemd-logind[1447]: New session 12 of user core.
Aug 13 07:08:09.129761 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:08:09.283273 sshd[5818]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:09.288284 systemd[1]: sshd@11-64.23.220.168:22-139.178.89.65:57904.service: Deactivated successfully.
Aug 13 07:08:09.292356 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:08:09.293557 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:08:09.294913 systemd-logind[1447]: Removed session 12.
Aug 13 07:08:14.303129 systemd[1]: Started sshd@12-64.23.220.168:22-139.178.89.65:57914.service - OpenSSH per-connection server daemon (139.178.89.65:57914).
Aug 13 07:08:14.483470 sshd[5859]: Accepted publickey for core from 139.178.89.65 port 57914 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:14.485846 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:14.493789 systemd-logind[1447]: New session 13 of user core.
Aug 13 07:08:14.499892 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:08:14.915067 systemd[1]: run-containerd-runc-k8s.io-1f0cc89dbb58ee0dec2a6b3154e8367ec2997c5cb7d7c85a4bc5d98eff460e9d-runc.eoPV4Q.mount: Deactivated successfully.
Aug 13 07:08:15.014163 sshd[5859]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:15.021500 systemd[1]: sshd@12-64.23.220.168:22-139.178.89.65:57914.service: Deactivated successfully.
Aug 13 07:08:15.026369 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:08:15.028869 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:08:15.030677 systemd-logind[1447]: Removed session 13.
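The two csi_plugin.go lines at the top of this stretch show kubelet validating and then registering the csi.tigera.io driver from the UNIX socket it exposed under /var/lib/kubelet/plugins/. As an illustration only of the underlying discovery primitive (kubelet's real plugin watcher is event-driven and layers a gRPC registration handshake on top), a self-contained Go sketch that scans a plugin directory tree for sockets:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"os"
    	"path/filepath"
    )

    // findPluginSockets walks a kubelet-style plugin directory and reports
    // UNIX sockets, the artifact behind registrations like the csi.tigera.io
    // entry above. Illustrative, not kubelet's actual implementation.
    func findPluginSockets(root string) ([]string, error) {
    	var socks []string
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil {
    			return err
    		}
    		info, err := d.Info()
    		if err != nil {
    			return err
    		}
    		if info.Mode()&os.ModeSocket != 0 {
    			socks = append(socks, path)
    		}
    		return nil
    	})
    	return socks, err
    }

    func main() {
    	socks, err := findPluginSockets("/var/lib/kubelet/plugins")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	for _, s := range socks {
    		fmt.Println(s) // e.g. /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
    	}
    }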
Aug 13 07:08:15.204471 kubelet[2496]: E0813 07:08:15.204253 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:16.090349 kubelet[2496]: E0813 07:08:16.090221 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:20.039999 systemd[1]: Started sshd@13-64.23.220.168:22-139.178.89.65:45510.service - OpenSSH per-connection server daemon (139.178.89.65:45510).
Aug 13 07:08:20.121855 sshd[5904]: Accepted publickey for core from 139.178.89.65 port 45510 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:20.124250 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:20.131678 systemd-logind[1447]: New session 14 of user core.
Aug 13 07:08:20.137782 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:08:20.407031 sshd[5904]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:20.412770 systemd[1]: sshd@13-64.23.220.168:22-139.178.89.65:45510.service: Deactivated successfully.
Aug 13 07:08:20.416129 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:08:20.418333 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:08:20.419866 systemd-logind[1447]: Removed session 14.
Aug 13 07:08:25.426878 systemd[1]: Started sshd@14-64.23.220.168:22-139.178.89.65:45516.service - OpenSSH per-connection server daemon (139.178.89.65:45516).
Aug 13 07:08:25.564432 sshd[5918]: Accepted publickey for core from 139.178.89.65 port 45516 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:25.567813 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:25.575052 systemd-logind[1447]: New session 15 of user core.
Aug 13 07:08:25.578787 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:08:26.106955 sshd[5918]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:26.112302 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:08:26.113797 systemd[1]: sshd@14-64.23.220.168:22-139.178.89.65:45516.service: Deactivated successfully.
Aug 13 07:08:26.117297 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:08:26.120829 systemd-logind[1447]: Removed session 15.
Aug 13 07:08:29.071925 kubelet[2496]: E0813 07:08:29.071769 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:31.129102 systemd[1]: Started sshd@15-64.23.220.168:22-139.178.89.65:60020.service - OpenSSH per-connection server daemon (139.178.89.65:60020).
Aug 13 07:08:31.209001 sshd[5953]: Accepted publickey for core from 139.178.89.65 port 60020 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:31.210092 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:31.217961 systemd-logind[1447]: New session 16 of user core.
Aug 13 07:08:31.221863 systemd[1]: Started session-16.scope - Session 16 of User core.
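The recurring dns.go:153 "Nameserver limits exceeded" events above reflect the glibc resolver's hard cap of three nameserver entries (MAXNS); notably, the applied line kept a duplicate (67.207.67.3 appears twice), so the truncation is positional rather than de-duplicating. A small Go sketch of that cap, with de-duplication added as an improvement the logged behavior evidently lacks; the 8.8.8.8 and 1.1.1.1 entries in the example are hypothetical extras used only to trigger truncation:

    package main

    import "fmt"

    // The stub resolver consulted by most Linux programs honors at most
    // three nameserver lines (glibc's MAXNS), which is what the kubelet
    // events above warn about when resolv.conf carries more.
    const maxNameservers = 3

    // capNameservers keeps the first maxNameservers unique entries.
    func capNameservers(servers []string) []string {
    	seen := make(map[string]bool)
    	var out []string
    	for _, s := range servers {
    		if seen[s] {
    			continue
    		}
    		seen[s] = true
    		out = append(out, s)
    		if len(out) == maxNameservers {
    			break
    		}
    	}
    	return out
    }

    func main() {
    	in := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8", "1.1.1.1"}
    	fmt.Println(capNameservers(in)) // [67.207.67.3 67.207.67.2 8.8.8.8]
    }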
Aug 13 07:08:31.548040 sshd[5953]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:31.559797 systemd[1]: sshd@15-64.23.220.168:22-139.178.89.65:60020.service: Deactivated successfully.
Aug 13 07:08:31.563364 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:08:31.567329 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:08:31.575191 systemd[1]: Started sshd@16-64.23.220.168:22-139.178.89.65:60022.service - OpenSSH per-connection server daemon (139.178.89.65:60022).
Aug 13 07:08:31.578314 systemd-logind[1447]: Removed session 16.
Aug 13 07:08:31.639239 sshd[5966]: Accepted publickey for core from 139.178.89.65 port 60022 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:31.643087 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:31.652453 systemd-logind[1447]: New session 17 of user core.
Aug 13 07:08:31.657845 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:08:32.156605 sshd[5966]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:32.167997 systemd[1]: sshd@16-64.23.220.168:22-139.178.89.65:60022.service: Deactivated successfully.
Aug 13 07:08:32.172241 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:08:32.175481 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:08:32.186974 systemd[1]: Started sshd@17-64.23.220.168:22-139.178.89.65:60038.service - OpenSSH per-connection server daemon (139.178.89.65:60038).
Aug 13 07:08:32.191188 systemd-logind[1447]: Removed session 17.
Aug 13 07:08:32.285162 sshd[5977]: Accepted publickey for core from 139.178.89.65 port 60038 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:32.288079 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:32.295153 systemd-logind[1447]: New session 18 of user core.
Aug 13 07:08:32.301012 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:08:33.364460 sshd[5977]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:33.377749 systemd[1]: sshd@17-64.23.220.168:22-139.178.89.65:60038.service: Deactivated successfully.
Aug 13 07:08:33.385065 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:08:33.389134 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:08:33.401033 systemd[1]: Started sshd@18-64.23.220.168:22-139.178.89.65:60042.service - OpenSSH per-connection server daemon (139.178.89.65:60042).
Aug 13 07:08:33.405033 systemd-logind[1447]: Removed session 18.
Aug 13 07:08:33.518188 sshd[5995]: Accepted publickey for core from 139.178.89.65 port 60042 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:33.521028 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:33.529012 systemd-logind[1447]: New session 19 of user core.
Aug 13 07:08:33.536969 systemd[1]: Started session-19.scope - Session 19 of User core.
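Each "Started sshd@N-...service" line above names a socket-activated, per-connection sshd instance; reading the lines themselves, the instance string carries a connection counter, the local address:port, and the peer address:port. A small illustrative parser in Go (the field meanings here are inferred from this log, not taken from systemd documentation):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Decode the instance part of the per-connection units above, e.g.
    // "sshd@9-64.23.220.168:22-139.178.89.65:39210.service" ->
    // counter 9, local 64.23.220.168:22, peer 139.178.89.65:39210.
    var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

    func main() {
    	unit := "sshd@9-64.23.220.168:22-139.178.89.65:39210.service"
    	if m := unitRe.FindStringSubmatch(unit); m != nil {
    		fmt.Printf("conn #%s  local %s  peer %s\n", m[1], m[2], m[3])
    	}
    }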
Aug 13 07:08:34.068214 kubelet[2496]: E0813 07:08:34.068127 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:34.677897 sshd[5995]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:34.701989 systemd[1]: Started sshd@19-64.23.220.168:22-139.178.89.65:60046.service - OpenSSH per-connection server daemon (139.178.89.65:60046).
Aug 13 07:08:34.706489 systemd[1]: sshd@18-64.23.220.168:22-139.178.89.65:60042.service: Deactivated successfully.
Aug 13 07:08:34.718109 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:08:34.727284 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:08:34.737364 systemd-logind[1447]: Removed session 19.
Aug 13 07:08:34.831110 sshd[6014]: Accepted publickey for core from 139.178.89.65 port 60046 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:34.836035 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:34.846624 systemd-logind[1447]: New session 20 of user core.
Aug 13 07:08:34.859899 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:08:35.107174 sshd[6014]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:35.118767 systemd[1]: sshd@19-64.23.220.168:22-139.178.89.65:60046.service: Deactivated successfully.
Aug 13 07:08:35.125662 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:08:35.127395 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:08:35.130940 systemd-logind[1447]: Removed session 20.
Aug 13 07:08:40.128089 systemd[1]: Started sshd@20-64.23.220.168:22-139.178.89.65:34044.service - OpenSSH per-connection server daemon (139.178.89.65:34044).
Aug 13 07:08:40.253128 sshd[6054]: Accepted publickey for core from 139.178.89.65 port 34044 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:40.255997 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:40.263939 systemd-logind[1447]: New session 21 of user core.
Aug 13 07:08:40.275112 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:08:40.770343 sshd[6054]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:40.775539 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:08:40.775816 systemd[1]: sshd@20-64.23.220.168:22-139.178.89.65:34044.service: Deactivated successfully.
Aug 13 07:08:40.781590 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:08:40.786320 systemd-logind[1447]: Removed session 21.
Aug 13 07:08:45.791988 systemd[1]: Started sshd@21-64.23.220.168:22-139.178.89.65:34050.service - OpenSSH per-connection server daemon (139.178.89.65:34050).
Aug 13 07:08:45.961556 sshd[6090]: Accepted publickey for core from 139.178.89.65 port 34050 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:45.964470 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:45.976932 systemd-logind[1447]: New session 22 of user core.
Aug 13 07:08:45.983775 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:08:46.633704 sshd[6090]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:46.645735 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:08:46.645958 systemd[1]: sshd@21-64.23.220.168:22-139.178.89.65:34050.service: Deactivated successfully.
Aug 13 07:08:46.649784 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:08:46.651387 systemd-logind[1447]: Removed session 22.
Aug 13 07:08:48.701993 systemd[1]: run-containerd-runc-k8s.io-a818f81f5cd11b23bec6c92c577e85bc47f63e723192f119263b03014f0e1301-runc.byKZXg.mount: Deactivated successfully.
Aug 13 07:08:51.656221 systemd[1]: Started sshd@22-64.23.220.168:22-139.178.89.65:53558.service - OpenSSH per-connection server daemon (139.178.89.65:53558).
Aug 13 07:08:51.726422 sshd[6123]: Accepted publickey for core from 139.178.89.65 port 53558 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:51.728580 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:51.737686 systemd-logind[1447]: New session 23 of user core.
Aug 13 07:08:51.745872 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:08:51.995586 sshd[6123]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:52.005538 systemd[1]: sshd@22-64.23.220.168:22-139.178.89.65:53558.service: Deactivated successfully.
Aug 13 07:08:52.012690 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:08:52.015701 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:08:52.018427 systemd-logind[1447]: Removed session 23.
Aug 13 07:08:57.015891 systemd[1]: Started sshd@23-64.23.220.168:22-139.178.89.65:53566.service - OpenSSH per-connection server daemon (139.178.89.65:53566).
Aug 13 07:08:57.116639 sshd[6157]: Accepted publickey for core from 139.178.89.65 port 53566 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:57.120344 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:57.127303 systemd-logind[1447]: New session 24 of user core.
Aug 13 07:08:57.132756 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:08:57.836363 sshd[6157]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:57.843788 systemd[1]: sshd@23-64.23.220.168:22-139.178.89.65:53566.service: Deactivated successfully.
Aug 13 07:08:57.848050 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:08:57.850244 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:08:57.853153 systemd-logind[1447]: Removed session 24.
Aug 13 07:09:00.096758 kubelet[2496]: E0813 07:09:00.088992 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:02.863043 systemd[1]: Started sshd@24-64.23.220.168:22-139.178.89.65:49904.service - OpenSSH per-connection server daemon (139.178.89.65:49904).
Aug 13 07:09:02.987819 sshd[6172]: Accepted publickey for core from 139.178.89.65 port 49904 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:02.993296 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:03.006297 systemd-logind[1447]: New session 25 of user core.
Aug 13 07:09:03.014455 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:09:03.523985 sshd[6172]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:03.533134 systemd[1]: sshd@24-64.23.220.168:22-139.178.89.65:49904.service: Deactivated successfully.
Aug 13 07:09:03.537900 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:09:03.541147 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:09:03.543900 systemd-logind[1447]: Removed session 25.
Aug 13 07:09:08.544459 systemd[1]: Started sshd@25-64.23.220.168:22-139.178.89.65:49918.service - OpenSSH per-connection server daemon (139.178.89.65:49918).
Aug 13 07:09:08.619565 sshd[6206]: Accepted publickey for core from 139.178.89.65 port 49918 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:08.621962 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:08.630668 systemd-logind[1447]: New session 26 of user core.
Aug 13 07:09:08.635767 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 07:09:08.890735 sshd[6206]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:08.898893 systemd[1]: sshd@25-64.23.220.168:22-139.178.89.65:49918.service: Deactivated successfully.
Aug 13 07:09:08.904035 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 07:09:08.906587 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit.
Aug 13 07:09:08.908405 systemd-logind[1447]: Removed session 26.
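Sessions 10 through 26 in this log all follow the same five-step shape: Accepted publickey, pam_unix session opened, New session N, session closed, Removed session N. A Go sketch that pairs the systemd-logind open/close lines from a dump like this one and prints per-session durations; it assumes the syslog-style "Aug 13 07:08:07.679461" prefix used throughout, and since that prefix carries no year, only durations (not absolute dates) are meaningful:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    var (
    	tsRe      = regexp.MustCompile(`^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) `)
    	newRe     = regexp.MustCompile(`New session (\d+) of user`)
    	removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    const tsLayout = "Jan 2 15:04:05.000000"

    func main() {
    	opened := map[string]time.Time{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		line := sc.Text()
    		m := tsRe.FindStringSubmatch(line)
    		if m == nil {
    			continue
    		}
    		ts, err := time.Parse(tsLayout, m[1])
    		if err != nil {
    			continue
    		}
    		if n := newRe.FindStringSubmatch(line); n != nil {
    			opened[n[1]] = ts // session N opened at ts
    		} else if r := removedRe.FindStringSubmatch(line); r != nil {
    			if start, ok := opened[r[1]]; ok {
    				fmt.Printf("session %s: %v\n", r[1], ts.Sub(start))
    				delete(opened, r[1])
    			}
    		}
    	}
    }

Fed the lines above on stdin, this would report, for example, session 10 lasting roughly 0.87s (07:08:07.679461 to 07:08:08.553284), consistent with the short-lived automated connections this host is receiving.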