May 17 00:20:23.985281 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:20:23.985311 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:20:23.985324 kernel: BIOS-provided physical RAM map:
May 17 00:20:23.985331 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:20:23.985338 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:20:23.985344 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:20:23.985354 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 17 00:20:23.985365 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 17 00:20:23.985372 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:20:23.985382 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:20:23.985389 kernel: NX (Execute Disable) protection: active
May 17 00:20:23.985396 kernel: APIC: Static calls initialized
May 17 00:20:23.985408 kernel: SMBIOS 2.8 present.
May 17 00:20:23.985416 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 17 00:20:23.985425 kernel: Hypervisor detected: KVM
May 17 00:20:23.985436 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:20:23.985446 kernel: kvm-clock: using sched offset of 3252574260 cycles
May 17 00:20:23.985456 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:20:23.985467 kernel: tsc: Detected 2494.138 MHz processor
May 17 00:20:23.985476 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:20:23.985484 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:20:23.985492 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 17 00:20:23.985500 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:20:23.985507 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:20:23.986864 kernel: ACPI: Early table checksum verification disabled
May 17 00:20:23.986886 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 17 00:20:23.986899 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.986911 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.986924 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.986936 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:20:23.986949 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.986961 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.986974 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.986996 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:23.987008 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 17 00:20:23.987020 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 17 00:20:23.987032 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:20:23.987040 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 17 00:20:23.987048 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 17 00:20:23.987056 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 17 00:20:23.987072 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 17 00:20:23.987081 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:20:23.987090 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:20:23.987098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:20:23.987106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 17 00:20:23.987125 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 17 00:20:23.987134 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 17 00:20:23.987146 kernel: Zone ranges:
May 17 00:20:23.987157 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:20:23.987165 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 17 00:20:23.987173 kernel: Normal empty
May 17 00:20:23.987182 kernel: Movable zone start for each node
May 17 00:20:23.987193 kernel: Early memory node ranges
May 17 00:20:23.987205 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:20:23.987218 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 17 00:20:23.987232 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 17 00:20:23.987246 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:20:23.987254 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:20:23.987266 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 17 00:20:23.987275 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:20:23.987285 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:20:23.987294 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:20:23.987303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:20:23.987311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:20:23.987319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:20:23.987331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:20:23.987339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:20:23.987347 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:20:23.987372 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:20:23.987383 kernel: TSC deadline timer available
May 17 00:20:23.987393 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:20:23.987402 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:20:23.987410 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 17 00:20:23.987422 kernel: Booting paravirtualized kernel on KVM
May 17 00:20:23.987431 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:20:23.987444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:20:23.987457 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:20:23.987465 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:20:23.987474 kernel: pcpu-alloc: [0] 0 1
May 17 00:20:23.987482 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:20:23.987492 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:20:23.987510 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:20:23.987537 kernel: random: crng init done
May 17 00:20:23.987555 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:20:23.987569 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:20:23.987585 kernel: Fallback order for Node 0: 0
May 17 00:20:23.987596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 17 00:20:23.987604 kernel: Policy zone: DMA32
May 17 00:20:23.987612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:20:23.987621 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 125148K reserved, 0K cma-reserved)
May 17 00:20:23.987634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:20:23.987653 kernel: Kernel/User page tables isolation: enabled
May 17 00:20:23.987669 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:20:23.987684 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:20:23.987694 kernel: Dynamic Preempt: voluntary
May 17 00:20:23.987703 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:20:23.987717 kernel: rcu: RCU event tracing is enabled.
May 17 00:20:23.987730 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:20:23.987744 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:20:23.987759 kernel: Rude variant of Tasks RCU enabled.
May 17 00:20:23.987774 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:20:23.987792 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:20:23.987804 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:20:23.987816 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:20:23.987828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:20:23.987844 kernel: Console: colour VGA+ 80x25
May 17 00:20:23.987857 kernel: printk: console [tty0] enabled
May 17 00:20:23.987869 kernel: printk: console [ttyS0] enabled
May 17 00:20:23.987882 kernel: ACPI: Core revision 20230628
May 17 00:20:23.987895 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:20:23.987912 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:20:23.987925 kernel: x2apic enabled
May 17 00:20:23.987937 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:20:23.987949 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:20:23.987961 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
May 17 00:20:23.987974 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
May 17 00:20:23.987988 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 17 00:20:23.988002 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 17 00:20:23.988030 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:20:23.988043 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:20:23.988058 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:20:23.988073 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:20:23.988086 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:20:23.988100 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:20:23.988112 kernel: MDS: Mitigation: Clear CPU buffers
May 17 00:20:23.988126 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:20:23.988148 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:20:23.988169 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:20:23.988184 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:20:23.988202 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:20:23.988218 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:20:23.988235 kernel: Freeing SMP alternatives memory: 32K
May 17 00:20:23.988251 kernel: pid_max: default: 32768 minimum: 301
May 17 00:20:23.988269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:20:23.988285 kernel: landlock: Up and running.
May 17 00:20:23.988307 kernel: SELinux: Initializing.
May 17 00:20:23.988318 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:20:23.988328 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:20:23.988337 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 17 00:20:23.988347 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:20:23.988356 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:20:23.988366 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:20:23.988376 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 17 00:20:23.988390 kernel: signal: max sigframe size: 1776
May 17 00:20:23.988404 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:20:23.988419 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:20:23.988433 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:20:23.988449 kernel: smp: Bringing up secondary CPUs ...
May 17 00:20:23.988464 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:20:23.988481 kernel: .... node #0, CPUs: #1
May 17 00:20:23.988498 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:20:23.989223 kernel: smpboot: Max logical packages: 1
May 17 00:20:23.989269 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
May 17 00:20:23.989294 kernel: devtmpfs: initialized
May 17 00:20:23.989309 kernel: x86/mm: Memory block size: 128MB
May 17 00:20:23.989323 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:20:23.989336 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:20:23.989349 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:20:23.989364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:20:23.989378 kernel: audit: initializing netlink subsys (disabled)
May 17 00:20:23.989392 kernel: audit: type=2000 audit(1747441222.714:1): state=initialized audit_enabled=0 res=1
May 17 00:20:23.989407 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:20:23.989427 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:20:23.989442 kernel: cpuidle: using governor menu
May 17 00:20:23.989459 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:20:23.989476 kernel: dca service started, version 1.12.1
May 17 00:20:23.989492 kernel: PCI: Using configuration type 1 for base access
May 17 00:20:23.989509 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:20:23.989548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:20:23.989564 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:20:23.989582 kernel: ACPI: Added _OSI(Module Device)
May 17 00:20:23.989601 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:20:23.989616 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:20:23.989633 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:20:23.989650 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:20:23.989667 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:20:23.989684 kernel: ACPI: Interpreter enabled
May 17 00:20:23.989700 kernel: ACPI: PM: (supports S0 S5)
May 17 00:20:23.989717 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:20:23.989733 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:20:23.989755 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:20:23.989771 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:20:23.989786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:20:23.990037 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:20:23.990166 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 17 00:20:23.990276 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 17 00:20:23.990288 kernel: acpiphp: Slot [3] registered
May 17 00:20:23.990302 kernel: acpiphp: Slot [4] registered
May 17 00:20:23.990312 kernel: acpiphp: Slot [5] registered
May 17 00:20:23.990321 kernel: acpiphp: Slot [6] registered
May 17 00:20:23.990330 kernel: acpiphp: Slot [7] registered
May 17 00:20:23.990339 kernel: acpiphp: Slot [8] registered
May 17 00:20:23.990348 kernel: acpiphp: Slot [9] registered
May 17 00:20:23.990357 kernel: acpiphp: Slot [10] registered
May 17 00:20:23.990366 kernel: acpiphp: Slot [11] registered
May 17 00:20:23.990375 kernel: acpiphp: Slot [12] registered
May 17 00:20:23.990387 kernel: acpiphp: Slot [13] registered
May 17 00:20:23.990396 kernel: acpiphp: Slot [14] registered
May 17 00:20:23.990405 kernel: acpiphp: Slot [15] registered
May 17 00:20:23.990414 kernel: acpiphp: Slot [16] registered
May 17 00:20:23.990423 kernel: acpiphp: Slot [17] registered
May 17 00:20:23.990432 kernel: acpiphp: Slot [18] registered
May 17 00:20:23.990441 kernel: acpiphp: Slot [19] registered
May 17 00:20:23.990451 kernel: acpiphp: Slot [20] registered
May 17 00:20:23.990466 kernel: acpiphp: Slot [21] registered
May 17 00:20:23.990480 kernel: acpiphp: Slot [22] registered
May 17 00:20:23.990496 kernel: acpiphp: Slot [23] registered
May 17 00:20:23.990512 kernel: acpiphp: Slot [24] registered
May 17 00:20:23.990559 kernel: acpiphp: Slot [25] registered
May 17 00:20:23.990575 kernel: acpiphp: Slot [26] registered
May 17 00:20:23.990585 kernel: acpiphp: Slot [27] registered
May 17 00:20:23.990594 kernel: acpiphp: Slot [28] registered
May 17 00:20:23.990603 kernel: acpiphp: Slot [29] registered
May 17 00:20:23.990612 kernel: acpiphp: Slot [30] registered
May 17 00:20:23.990622 kernel: acpiphp: Slot [31] registered
May 17 00:20:23.990636 kernel: PCI host bridge to bus 0000:00
May 17 00:20:23.990758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:20:23.990847 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:20:23.990935 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:20:23.991018 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:20:23.991119 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 17 00:20:23.991219 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:20:23.991371 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:20:23.991642 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:20:23.991798 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 17 00:20:23.991899 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 17 00:20:23.991996 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 17 00:20:23.992092 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 17 00:20:23.992187 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 17 00:20:23.992291 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 17 00:20:23.992400 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 17 00:20:23.992497 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 17 00:20:23.992634 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 17 00:20:23.992731 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 17 00:20:23.992840 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 17 00:20:23.993014 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 17 00:20:23.993133 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 17 00:20:23.993240 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 17 00:20:23.993395 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 17 00:20:23.993578 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 17 00:20:23.993737 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:20:23.993871 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:20:23.993981 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 17 00:20:23.994103 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 17 00:20:23.994217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 17 00:20:23.994323 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:20:23.994432 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 17 00:20:23.994552 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 17 00:20:23.994666 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 17 00:20:23.994801 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 17 00:20:23.994899 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 17 00:20:23.995020 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 17 00:20:23.995142 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 17 00:20:23.995284 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 17 00:20:23.995500 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:20:23.996492 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 17 00:20:23.996660 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 17 00:20:23.996847 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 17 00:20:23.997005 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 17 00:20:23.997207 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 17 00:20:23.997403 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 17 00:20:23.997675 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 17 00:20:23.997809 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 17 00:20:23.997906 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 17 00:20:23.997918 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:20:23.997928 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:20:23.997937 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:20:23.997947 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:20:23.997956 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:20:23.997969 kernel: iommu: Default domain type: Translated
May 17 00:20:23.997978 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:20:23.997987 kernel: PCI: Using ACPI for IRQ routing
May 17 00:20:23.997996 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:20:23.998005 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:20:23.998014 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 17 00:20:23.998120 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 17 00:20:23.998243 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 17 00:20:23.998376 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:20:23.998400 kernel: vgaarb: loaded
May 17 00:20:23.998414 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:20:23.998428 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:20:23.998442 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:20:23.998458 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:20:23.998475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:20:23.998490 kernel: pnp: PnP ACPI init
May 17 00:20:23.998503 kernel: pnp: PnP ACPI: found 4 devices
May 17 00:20:23.998531 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:20:23.998551 kernel: NET: Registered PF_INET protocol family
May 17 00:20:23.998565 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:20:23.998580 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:20:23.998595 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:20:23.998609 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:20:23.998625 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:20:23.998641 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:20:23.998651 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:20:23.998664 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:20:23.998686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:20:23.998703 kernel: NET: Registered PF_XDP protocol family
May 17 00:20:23.998841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:20:23.998932 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:20:23.999019 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:20:23.999135 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:20:23.999279 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 17 00:20:23.999466 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 17 00:20:23.999695 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:20:23.999714 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:20:23.999819 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 34858 usecs
May 17 00:20:23.999844 kernel: PCI: CLS 0 bytes, default 64
May 17 00:20:23.999864 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:20:23.999874 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
May 17 00:20:23.999883 kernel: Initialise system trusted keyrings
May 17 00:20:23.999893 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:20:23.999908 kernel: Key type asymmetric registered
May 17 00:20:23.999918 kernel: Asymmetric key parser 'x509' registered
May 17 00:20:23.999927 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:20:23.999937 kernel: io scheduler mq-deadline registered
May 17 00:20:23.999946 kernel: io scheduler kyber registered
May 17 00:20:23.999955 kernel: io scheduler bfq registered
May 17 00:20:23.999964 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:20:23.999973 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 17 00:20:23.999988 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 17 00:20:23.999997 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 17 00:20:24.000009 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:20:24.000019 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:20:24.000028 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:20:24.000037 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:20:24.000047 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:20:24.000056 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:20:24.000200 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:20:24.000308 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:20:24.000422 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:20:23 UTC (1747441223)
May 17 00:20:24.000616 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 17 00:20:24.000643 kernel: intel_pstate: CPU model not supported
May 17 00:20:24.000661 kernel: NET: Registered PF_INET6 protocol family
May 17 00:20:24.000676 kernel: Segment Routing with IPv6
May 17 00:20:24.000688 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:20:24.000701 kernel: NET: Registered PF_PACKET protocol family
May 17 00:20:24.000717 kernel: Key type dns_resolver registered
May 17 00:20:24.000740 kernel: IPI shorthand broadcast: enabled
May 17 00:20:24.000756 kernel: sched_clock: Marking stable (948007535, 121829615)->(1170092138, -100254988)
May 17 00:20:24.000773 kernel: registered taskstats version 1
May 17 00:20:24.000789 kernel: Loading compiled-in X.509 certificates
May 17 00:20:24.000807 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:20:24.000820 kernel: Key type .fscrypt registered
May 17 00:20:24.000834 kernel: Key type fscrypt-provisioning registered
May 17 00:20:24.000845 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:20:24.000854 kernel: ima: Allocated hash algorithm: sha1
May 17 00:20:24.000868 kernel: ima: No architecture policies found
May 17 00:20:24.000876 kernel: clk: Disabling unused clocks
May 17 00:20:24.000885 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:20:24.000895 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:20:24.000904 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:20:24.000934 kernel: Run /init as init process
May 17 00:20:24.000946 kernel: with arguments:
May 17 00:20:24.000955 kernel: /init
May 17 00:20:24.000965 kernel: with environment:
May 17 00:20:24.000977 kernel: HOME=/
May 17 00:20:24.000986 kernel: TERM=linux
May 17 00:20:24.000996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:20:24.001009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:20:24.001022 systemd[1]: Detected virtualization kvm.
May 17 00:20:24.001032 systemd[1]: Detected architecture x86-64.
May 17 00:20:24.001041 systemd[1]: Running in initrd.
May 17 00:20:24.001051 systemd[1]: No hostname configured, using default hostname.
May 17 00:20:24.001064 systemd[1]: Hostname set to <localhost>.
May 17 00:20:24.001074 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:20:24.001084 systemd[1]: Queued start job for default target initrd.target.
May 17 00:20:24.001094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:20:24.001104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:20:24.001116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:20:24.001126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:20:24.001139 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:20:24.001149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:20:24.001161 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:20:24.001171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:20:24.001181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:20:24.001191 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:20:24.001201 systemd[1]: Reached target paths.target - Path Units.
May 17 00:20:24.001214 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:20:24.001224 systemd[1]: Reached target swap.target - Swaps.
May 17 00:20:24.001235 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:20:24.001247 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:20:24.001257 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:20:24.001268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:20:24.001282 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:20:24.001292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:20:24.001302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:20:24.001312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:20:24.001322 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:20:24.001338 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:20:24.001356 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:20:24.001366 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:20:24.001380 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:20:24.001390 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:20:24.001400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:20:24.001410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:24.001428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:20:24.001477 systemd-journald[183]: Collecting audit messages is disabled.
May 17 00:20:24.001606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:20:24.001627 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:20:24.001642 systemd-journald[183]: Journal started
May 17 00:20:24.001680 systemd-journald[183]: Runtime Journal (/run/log/journal/c17667bc508b44cbb5c016e03fcff23c) is 4.9M, max 39.3M, 34.4M free.
May 17 00:20:23.995918 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:20:24.014556 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:20:24.017578 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:20:24.037550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:20:24.039687 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:20:24.059036 kernel: Bridge firewalling registered
May 17 00:20:24.058003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:20:24.058629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:20:24.064184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:24.071851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:24.077871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:20:24.081822 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:20:24.083138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:20:24.102894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:20:24.111635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:24.116840 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:20:24.118634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:20:24.120062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:20:24.134851 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:20:24.145889 dracut-cmdline[214]: dracut-dracut-053
May 17 00:20:24.152549 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:20:24.177362 systemd-resolved[218]: Positive Trust Anchors:
May 17 00:20:24.177385 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:20:24.177435 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:20:24.184124 systemd-resolved[218]: Defaulting to hostname 'linux'.
May 17 00:20:24.186680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:20:24.187211 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:20:24.254647 kernel: SCSI subsystem initialized
May 17 00:20:24.269570 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:20:24.285582 kernel: iscsi: registered transport (tcp)
May 17 00:20:24.310629 kernel: iscsi: registered transport (qla4xxx)
May 17 00:20:24.310749 kernel: QLogic iSCSI HBA Driver
May 17 00:20:24.372029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:20:24.382906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:20:24.415562 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:20:24.415686 kernel: device-mapper: uevent: version 1.0.3
May 17 00:20:24.417671 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:20:24.468585 kernel: raid6: avx2x4 gen() 16548 MB/s
May 17 00:20:24.485571 kernel: raid6: avx2x2 gen() 15203 MB/s
May 17 00:20:24.502653 kernel: raid6: avx2x1 gen() 12559 MB/s
May 17 00:20:24.502763 kernel: raid6: using algorithm avx2x4 gen() 16548 MB/s
May 17 00:20:24.520974 kernel: raid6: .... xor() 4193 MB/s, rmw enabled
May 17 00:20:24.521106 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:20:24.550571 kernel: xor: automatically using best checksumming function avx
May 17 00:20:24.782582 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:20:24.799104 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:20:24.806927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:20:24.835799 systemd-udevd[401]: Using default interface naming scheme 'v255'.
May 17 00:20:24.841318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:20:24.850779 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:20:24.883558 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
May 17 00:20:24.931097 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:20:24.937866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:20:25.004283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:25.016861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:20:25.032883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:20:25.034818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:20:25.036709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:25.038410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:20:25.044870 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:20:25.077456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:20:25.103562 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 17 00:20:25.108251 kernel: scsi host0: Virtio SCSI HBA
May 17 00:20:25.114585 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:20:25.123547 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:20:25.123649 kernel: GPT:9289727 != 125829119
May 17 00:20:25.123672 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:20:25.123705 kernel: GPT:9289727 != 125829119
May 17 00:20:25.123718 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:20:25.124700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:20:25.137645 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:20:25.139557 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 17 00:20:25.141607 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 17 00:20:25.180796 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:20:25.180890 kernel: AES CTR mode by8 optimization enabled
May 17 00:20:25.185545 kernel: libata version 3.00 loaded.
May 17 00:20:25.190676 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:20:25.200613 kernel: scsi host1: ata_piix
May 17 00:20:25.201165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:20:25.212247 kernel: scsi host2: ata_piix
May 17 00:20:25.212457 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:20:25.212473 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:20:25.201297 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:25.207993 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:25.208348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:25.208591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:25.208995 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:25.218999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:25.224545 kernel: ACPI: bus type USB registered
May 17 00:20:25.224614 kernel: usbcore: registered new interface driver usbfs
May 17 00:20:25.226555 kernel: usbcore: registered new interface driver hub
May 17 00:20:25.227839 kernel: usbcore: registered new device driver usb
May 17 00:20:25.233559 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
May 17 00:20:25.262544 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (446)
May 17 00:20:25.272884 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 17 00:20:25.298673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:25.306210 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 17 00:20:25.311805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:20:25.316797 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 17 00:20:25.317369 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 17 00:20:25.329887 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:20:25.332215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:25.339265 disk-uuid[531]: Primary Header is updated.
May 17 00:20:25.339265 disk-uuid[531]: Secondary Entries is updated.
May 17 00:20:25.339265 disk-uuid[531]: Secondary Header is updated.
May 17 00:20:25.349595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:20:25.354568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:20:25.372560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:20:25.374554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:25.412118 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:20:25.413150 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:20:25.413538 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:20:25.413770 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 17 00:20:25.415297 kernel: hub 1-0:1.0: USB hub found
May 17 00:20:25.415790 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:20:26.365570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:20:26.366074 disk-uuid[532]: The operation has completed successfully.
May 17 00:20:26.415037 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:20:26.415184 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:20:26.430880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:20:26.444296 sh[563]: Success
May 17 00:20:26.461722 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:20:26.538880 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:20:26.553768 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:20:26.556675 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:20:26.585093 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:20:26.585191 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:26.585214 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:20:26.586707 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:20:26.587586 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:20:26.597264 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:20:26.598767 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:20:26.607877 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:20:26.611909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:20:26.626342 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:26.626432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:26.626461 kernel: BTRFS info (device vda6): using free space tree
May 17 00:20:26.630543 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:20:26.644617 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:20:26.646294 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:26.654058 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:20:26.662944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:20:26.781426 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:20:26.795895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:20:26.816436 ignition[647]: Ignition 2.19.0
May 17 00:20:26.816460 ignition[647]: Stage: fetch-offline
May 17 00:20:26.818099 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:20:26.816562 ignition[647]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:26.816574 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:26.816715 ignition[647]: parsed url from cmdline: ""
May 17 00:20:26.816722 ignition[647]: no config URL provided
May 17 00:20:26.816731 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:26.816742 ignition[647]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:26.816749 ignition[647]: failed to fetch config: resource requires networking
May 17 00:20:26.816996 ignition[647]: Ignition finished successfully
May 17 00:20:26.824505 systemd-networkd[751]: lo: Link UP
May 17 00:20:26.824616 systemd-networkd[751]: lo: Gained carrier
May 17 00:20:26.827453 systemd-networkd[751]: Enumeration completed
May 17 00:20:26.827677 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:20:26.828018 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 17 00:20:26.828022 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:20:26.829017 systemd[1]: Reached target network.target - Network.
May 17 00:20:26.830363 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:26.830368 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:20:26.831919 systemd-networkd[751]: eth0: Link UP
May 17 00:20:26.831925 systemd-networkd[751]: eth0: Gained carrier
May 17 00:20:26.831938 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 17 00:20:26.835865 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:20:26.836937 systemd-networkd[751]: eth1: Link UP
May 17 00:20:26.836942 systemd-networkd[751]: eth1: Gained carrier
May 17 00:20:26.836957 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:26.854332 systemd-networkd[751]: eth0: DHCPv4 address 64.23.130.50/20, gateway 64.23.128.1 acquired from 169.254.169.253
May 17 00:20:26.859677 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253
May 17 00:20:26.876687 ignition[755]: Ignition 2.19.0
May 17 00:20:26.877626 ignition[755]: Stage: fetch
May 17 00:20:26.877914 ignition[755]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:26.877935 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:26.878059 ignition[755]: parsed url from cmdline: ""
May 17 00:20:26.878063 ignition[755]: no config URL provided
May 17 00:20:26.878069 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:26.878078 ignition[755]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:26.878103 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:20:26.894824 ignition[755]: GET result: OK
May 17 00:20:26.894972 ignition[755]: parsing config with SHA512: f4dffaf3b74c8d9387cfc2f1f4f88c1c1edc358cb6dc723fcfe20d4d11b80f6ab5cec53d727bdf2f1f5fd623716160f87a8fdb0765e361726c490ef4af63e529
May 17 00:20:26.904173 unknown[755]: fetched base config from "system"
May 17 00:20:26.904190 unknown[755]: fetched base config from "system"
May 17 00:20:26.905547 ignition[755]: fetch: fetch complete
May 17 00:20:26.904200 unknown[755]: fetched user config from "digitalocean"
May 17 00:20:26.905564 ignition[755]: fetch: fetch passed
May 17 00:20:26.905662 ignition[755]: Ignition finished successfully
May 17 00:20:26.908132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:20:26.915894 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:20:26.943696 ignition[762]: Ignition 2.19.0
May 17 00:20:26.943709 ignition[762]: Stage: kargs
May 17 00:20:26.944031 ignition[762]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:26.944051 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:26.947384 ignition[762]: kargs: kargs passed
May 17 00:20:26.947481 ignition[762]: Ignition finished successfully
May 17 00:20:26.950072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:20:26.955976 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:20:26.977934 ignition[768]: Ignition 2.19.0
May 17 00:20:26.977949 ignition[768]: Stage: disks
May 17 00:20:26.978244 ignition[768]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:26.978261 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:26.980025 ignition[768]: disks: disks passed
May 17 00:20:26.980116 ignition[768]: Ignition finished successfully
May 17 00:20:26.981588 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:20:26.985410 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:20:26.986076 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:20:26.986882 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:20:26.987890 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:20:26.988598 systemd[1]: Reached target basic.target - Basic System.
May 17 00:20:26.993854 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:20:27.024414 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:20:27.028784 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:20:27.035051 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:20:27.175551 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:20:27.177153 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:20:27.178421 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:20:27.186776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:27.189696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:20:27.194997 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
May 17 00:20:27.203555 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
May 17 00:20:27.206404 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:27.206482 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:27.206496 kernel: BTRFS info (device vda6): using free space tree
May 17 00:20:27.207957 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:20:27.213716 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:20:27.213870 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:20:27.213949 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:20:27.218384 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:20:27.223758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:27.231907 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:20:27.305839 coreos-metadata[787]: May 17 00:20:27.305 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:20:27.319793 coreos-metadata[787]: May 17 00:20:27.319 INFO Fetch successful
May 17 00:20:27.321648 coreos-metadata[788]: May 17 00:20:27.321 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:20:27.325647 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:20:27.329900 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:20:27.330022 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
May 17 00:20:27.334002 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
May 17 00:20:27.337549 coreos-metadata[788]: May 17 00:20:27.337 INFO Fetch successful
May 17 00:20:27.341885 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:20:27.343829 coreos-metadata[788]: May 17 00:20:27.343 INFO wrote hostname ci-4081.3.3-n-6deca81674 to /sysroot/etc/hostname
May 17 00:20:27.346641 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:20:27.350921 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:20:27.467934 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:20:27.473773 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:20:27.476789 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:20:27.494553 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:27.521795 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:20:27.529104 ignition[905]: INFO : Ignition 2.19.0
May 17 00:20:27.530905 ignition[905]: INFO : Stage: mount
May 17 00:20:27.530905 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:27.530905 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:27.532388 ignition[905]: INFO : mount: mount passed
May 17 00:20:27.532388 ignition[905]: INFO : Ignition finished successfully
May 17 00:20:27.533256 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:20:27.539734 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:20:27.583765 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:20:27.592856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:27.602578 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917)
May 17 00:20:27.604748 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:27.604842 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:27.605608 kernel: BTRFS info (device vda6): using free space tree
May 17 00:20:27.609547 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:20:27.612727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:27.639815 ignition[934]: INFO : Ignition 2.19.0
May 17 00:20:27.640930 ignition[934]: INFO : Stage: files
May 17 00:20:27.642745 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:27.642745 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:27.642745 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:20:27.644711 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:20:27.644711 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:20:27.649098 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:20:27.649931 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:20:27.650399 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:20:27.650324 unknown[934]: wrote ssh authorized keys file for user: core
May 17 00:20:27.652295 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 00:20:27.653101 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 17 00:20:28.475879 systemd-networkd[751]: eth1: Gained IPv6LL
May 17 00:20:28.667977 systemd-networkd[751]: eth0: Gained IPv6LL
May 17 00:20:28.702043 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:20:28.786695 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 00:20:28.786695 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:20:28.788747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:28.795740 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:28.795740 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:28.795740 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 17 00:20:29.480010 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:20:29.839893 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:29.839893 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:20:29.843686 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:20:29.843686 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:20:29.843686 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:20:29.843686 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:20:29.843686 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:20:29.843686 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:20:29.843686 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:20:29.843686 ignition[934]: INFO : files: files passed
May 17 00:20:29.843686 ignition[934]: INFO : Ignition finished successfully
May 17 00:20:29.845156 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:20:29.854879 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:20:29.861038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:20:29.868222 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:20:29.868427 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:20:29.893247 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:20:29.893247 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:20:29.896857 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:20:29.899179 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:20:29.900961 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:20:29.905961 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:20:29.973283 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:20:29.973500 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:20:29.974645 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:20:29.975849 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:20:29.976461 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:20:29.989113 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:20:30.018915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:20:30.028107 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:20:30.063954 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:20:30.065446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:30.066282 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:20:30.067702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:20:30.067964 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:20:30.069807 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:20:30.071665 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:20:30.072603 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:20:30.073498 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:20:30.074734 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:20:30.076092 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:20:30.077153 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:20:30.080734 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:20:30.081955 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:20:30.082871 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:20:30.084633 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:20:30.084873 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:20:30.086307 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:20:30.087304 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:20:30.088178 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:20:30.088332 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:20:30.089210 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:20:30.089432 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:20:30.094090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:20:30.094464 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:20:30.096470 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:20:30.096794 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:20:30.098170 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:20:30.098413 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:20:30.112731 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:20:30.117678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:20:30.118395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:20:30.135460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:20:30.136071 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:20:30.136419 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:30.137301 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:20:30.137541 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:20:30.145112 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:20:30.145284 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:20:30.170001 ignition[987]: INFO : Ignition 2.19.0
May 17 00:20:30.172674 ignition[987]: INFO : Stage: umount
May 17 00:20:30.172674 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:30.172674 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:20:30.192418 ignition[987]: INFO : umount: umount passed
May 17 00:20:30.192418 ignition[987]: INFO : Ignition finished successfully
May 17 00:20:30.184734 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:20:30.185998 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:20:30.186213 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:20:30.195967 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:20:30.196199 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:20:30.214703 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:20:30.214818 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:20:30.216104 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:20:30.216204 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:20:30.219499 systemd[1]: Stopped target network.target - Network.
May 17 00:20:30.220181 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:20:30.220326 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:20:30.221333 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:20:30.222337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:20:30.222426 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:20:30.223898 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:20:30.224352 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:20:30.225440 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:20:30.225556 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:20:30.227332 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:20:30.227727 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:20:30.234330 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:20:30.234477 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:20:30.235865 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:20:30.235975 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:20:30.236879 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:20:30.239693 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:20:30.240783 systemd-networkd[751]: eth1: DHCPv6 lease lost
May 17 00:20:30.242205 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:20:30.242410 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:20:30.242614 systemd-networkd[751]: eth0: DHCPv6 lease lost
May 17 00:20:30.244738 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:20:30.245109 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:20:30.249745 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:20:30.250062 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:20:30.253511 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:20:30.253710 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:20:30.254959 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:20:30.255065 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:20:30.267942 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:20:30.269137 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:20:30.269232 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:20:30.270566 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:20:30.270758 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:20:30.271482 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:20:30.271695 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:20:30.273717 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:20:30.273983 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:20:30.275165 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:20:30.298436 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:20:30.298693 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:20:30.301089 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:20:30.301178 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:20:30.303021 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:20:30.303101 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:20:30.305425 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:20:30.305584 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:20:30.307096 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:20:30.307217 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:20:30.309033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:20:30.309151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:30.318053 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:20:30.318994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:20:30.319132 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:20:30.320137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:30.320234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:30.321572 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:20:30.321782 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:20:30.340471 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:20:30.340732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:20:30.343284 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:20:30.358932 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:20:30.372492 systemd[1]: Switching root.
May 17 00:20:30.404144 systemd-journald[183]: Journal stopped
May 17 00:20:31.963059 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
May 17 00:20:31.963217 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:20:31.963243 kernel: SELinux: policy capability open_perms=1
May 17 00:20:31.963265 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:20:31.963284 kernel: SELinux: policy capability always_check_network=0
May 17 00:20:31.963303 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:20:31.963323 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:20:31.963355 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:20:31.963386 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:20:31.963410 kernel: audit: type=1403 audit(1747441230.592:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:20:31.963430 systemd[1]: Successfully loaded SELinux policy in 52.380ms.
May 17 00:20:31.963460 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.731ms.
May 17 00:20:31.963484 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:20:31.963505 systemd[1]: Detected virtualization kvm.
May 17 00:20:31.965591 systemd[1]: Detected architecture x86-64.
May 17 00:20:31.965630 systemd[1]: Detected first boot.
May 17 00:20:31.965665 systemd[1]: Hostname set to <ci-4081.3.3-n-6deca81674>.
May 17 00:20:31.965695 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:20:31.965716 zram_generator::config[1030]: No configuration found.
May 17 00:20:31.965740 systemd[1]: Populated /etc with preset unit settings.
May 17 00:20:31.965760 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:20:31.965780 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:20:31.965825 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:20:31.965850 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:20:31.965874 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:20:31.965903 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:20:31.965943 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:20:31.965969 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:20:31.965991 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:20:31.966011 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:20:31.966033 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:20:31.966054 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:20:31.966077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:20:31.966099 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:20:31.966128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:20:31.966154 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:20:31.966176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:20:31.966199 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:20:31.966224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:20:31.966244 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:20:31.966298 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:20:31.966327 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:20:31.966349 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:20:31.966374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:31.966398 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:20:31.966428 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:20:31.966454 systemd[1]: Reached target swap.target - Swaps.
May 17 00:20:31.966473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:20:31.966493 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:20:31.966553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:20:31.966592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:20:31.966612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:20:31.966631 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:20:31.966652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:20:31.966682 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:20:31.966702 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:20:31.966722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:31.966744 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:20:31.966772 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:20:31.966791 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:20:31.966812 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:20:31.966832 systemd[1]: Reached target machines.target - Containers.
May 17 00:20:31.966852 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:20:31.966872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:20:31.966893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:20:31.966914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:20:31.966939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:20:31.966953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:20:31.966967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:20:31.966984 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:20:31.967004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:20:31.967025 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:20:31.967040 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:20:31.967056 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:20:31.967070 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:20:31.967106 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:20:31.967120 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:20:31.967134 kernel: fuse: init (API version 7.39)
May 17 00:20:31.967150 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:20:31.967164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:20:31.967177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:20:31.967190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:20:31.967204 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:20:31.967217 systemd[1]: Stopped verity-setup.service.
May 17 00:20:31.967234 kernel: ACPI: bus type drm_connector registered
May 17 00:20:31.967248 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:31.967262 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:20:31.967275 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:20:31.967290 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:20:31.967305 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:20:31.967322 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:20:31.967353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:20:31.967372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:20:31.967392 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:20:31.967422 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:20:31.967443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:20:31.967477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:20:31.967496 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:20:31.969563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:20:31.969617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:20:31.969637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:20:31.969652 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:20:31.969666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:20:31.969689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:20:31.969746 systemd-journald[1104]: Collecting audit messages is disabled.
May 17 00:20:31.969776 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:20:31.969790 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:20:31.969803 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:20:31.969817 systemd-journald[1104]: Journal started
May 17 00:20:31.969847 systemd-journald[1104]: Runtime Journal (/run/log/journal/c17667bc508b44cbb5c016e03fcff23c) is 4.9M, max 39.3M, 34.4M free.
May 17 00:20:31.537210 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:20:31.974761 kernel: loop: module loaded
May 17 00:20:31.974824 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:20:31.560855 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 00:20:31.561475 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:20:31.989390 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:20:31.989491 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:20:31.993678 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:20:32.000673 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:20:32.011657 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:20:32.023587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:20:32.027568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:20:32.039563 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:20:32.044123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:20:32.060609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:20:32.089732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:20:32.089845 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:20:32.096614 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:20:32.100658 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:20:32.102170 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:20:32.102632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:20:32.104012 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:20:32.105880 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:20:32.107060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:20:32.158751 kernel: loop0: detected capacity change from 0 to 229808
May 17 00:20:32.158618 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:20:32.168212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:20:32.181208 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:20:32.193163 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:20:32.194534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:20:32.213610 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:20:32.207907 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:20:32.255655 kernel: loop1: detected capacity change from 0 to 8
May 17 00:20:32.268234 systemd-journald[1104]: Time spent on flushing to /var/log/journal/c17667bc508b44cbb5c016e03fcff23c is 146.163ms for 992 entries.
May 17 00:20:32.268234 systemd-journald[1104]: System Journal (/var/log/journal/c17667bc508b44cbb5c016e03fcff23c) is 8.0M, max 195.6M, 187.6M free.
May 17 00:20:32.440972 systemd-journald[1104]: Received client request to flush runtime journal.
May 17 00:20:32.441100 kernel: loop2: detected capacity change from 0 to 142488
May 17 00:20:32.441127 kernel: loop3: detected capacity change from 0 to 140768
May 17 00:20:32.282831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:20:32.299104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:20:32.305638 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:20:32.351433 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:32.361906 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:20:32.367943 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:20:32.374847 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:20:32.407082 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
May 17 00:20:32.407100 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
May 17 00:20:32.414902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:20:32.444105 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:20:32.456892 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:20:32.464753 kernel: loop4: detected capacity change from 0 to 229808
May 17 00:20:32.489693 kernel: loop5: detected capacity change from 0 to 8
May 17 00:20:32.494491 kernel: loop6: detected capacity change from 0 to 142488
May 17 00:20:32.523562 kernel: loop7: detected capacity change from 0 to 140768
May 17 00:20:32.564486 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 17 00:20:32.565399 (sd-merge)[1177]: Merged extensions into '/usr'.
May 17 00:20:32.580812 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:20:32.581269 systemd[1]: Reloading...
May 17 00:20:32.816564 zram_generator::config[1203]: No configuration found.
May 17 00:20:32.985202 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:20:33.085181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:20:33.149692 systemd[1]: Reloading finished in 565 ms.
May 17 00:20:33.177921 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:20:33.182049 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:20:33.196950 systemd[1]: Starting ensure-sysext.service...
May 17 00:20:33.208896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:20:33.234360 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
May 17 00:20:33.234590 systemd[1]: Reloading...
May 17 00:20:33.288627 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:20:33.289205 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:20:33.290793 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:20:33.292915 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
May 17 00:20:33.293018 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
May 17 00:20:33.301383 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:20:33.301404 systemd-tmpfiles[1247]: Skipping /boot
May 17 00:20:33.334356 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:20:33.334374 systemd-tmpfiles[1247]: Skipping /boot
May 17 00:20:33.349562 zram_generator::config[1273]: No configuration found.
May 17 00:20:33.601365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:20:33.661474 systemd[1]: Reloading finished in 426 ms.
May 17 00:20:33.678285 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:20:33.679241 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:20:33.697830 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:20:33.702853 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
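Worth noting in the stretch above: sd-merge overlaid four system extension images onto /usr (the loop4-loop7 capacity changes are those images being attached). A sysext image is only merged if it carries an extension-release file whose fields match the host OS; a sketch of what the kubernetes image would contain, with values assumed from Flatcar's usual sysext conventions rather than read out of this particular image:

```
# Inside the sysext image, at
# /usr/lib/extension-release.d/extension-release.kubernetes
# (assumed values; ID must match the host's /etc/os-release)
ID=flatcar
SYSEXT_LEVEL=1.0
```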
May 17 00:20:33.708058 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:20:33.718814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:20:33.722790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:20:33.729837 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:20:33.738151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:33.738388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:20:33.746163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:20:33.751105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:20:33.757498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:20:33.759954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:20:33.760127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:33.761374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:20:33.761965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:20:33.769471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:33.771861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:20:33.779389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:20:33.780111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:20:33.788591 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:20:33.789467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:33.795226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:33.795545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:20:33.800083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:20:33.800688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:20:33.800856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:33.810156 systemd[1]: Finished ensure-sysext.service.
May 17 00:20:33.812214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:20:33.812381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:20:33.821821 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:20:33.834945 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:20:33.839187 augenrules[1349]: No rules
May 17 00:20:33.844302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:20:33.845309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:20:33.845468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:20:33.846526 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:20:33.847305 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:20:33.847537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:20:33.870137 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:20:33.870411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:20:33.872962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:20:33.892608 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:20:33.902822 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:20:33.920196 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:20:33.921111 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:20:33.926244 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
May 17 00:20:33.940067 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:20:33.947165 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:20:33.970856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:20:33.983746 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:20:34.095673 systemd-networkd[1370]: lo: Link UP
May 17 00:20:34.095684 systemd-networkd[1370]: lo: Gained carrier
May 17 00:20:34.096583 systemd-networkd[1370]: Enumeration completed
May 17 00:20:34.096731 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:20:34.107921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:20:34.108948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:20:34.112310 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:20:34.117186 systemd-resolved[1324]: Positive Trust Anchors:
May 17 00:20:34.118364 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:20:34.118420 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:20:34.126370 systemd-resolved[1324]: Using system hostname 'ci-4081.3.3-n-6deca81674'.
May 17 00:20:34.129782 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:20:34.130512 systemd[1]: Reached target network.target - Network.
May 17 00:20:34.131075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:20:34.177304 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 17 00:20:34.178220 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:34.178374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:20:34.185823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:20:34.196810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:20:34.203571 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1384)
May 17 00:20:34.205940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:20:34.207765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:20:34.207828 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:20:34.207853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:34.210014 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 00:20:34.224571 kernel: ISO 9660 Extensions: RRIP_1991A
May 17 00:20:34.229641 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 17 00:20:34.248112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:20:34.249593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:20:34.254443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:20:34.254777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:20:34.259953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:20:34.261946 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:20:34.263640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:20:34.271552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:20:34.291873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:20:34.292385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:20:34.320018 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-da:24:5c:66:2c:75.network.
May 17 00:20:34.320972 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:20:34.324915 systemd-networkd[1370]: eth1: Link UP
May 17 00:20:34.324925 systemd-networkd[1370]: eth1: Gained carrier
May 17 00:20:34.332501 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-3a:60:74:17:7c:2c.network.
May 17 00:20:34.334774 systemd-networkd[1370]: eth0: Link UP
May 17 00:20:34.334784 systemd-networkd[1370]: eth0: Gained carrier
May 17 00:20:34.339693 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
May 17 00:20:34.345573 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 17 00:20:34.356545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:20:34.363570 kernel: ACPI: button: Power Button [PWRF]
May 17 00:20:34.367641 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 17 00:20:34.439742 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:20:34.444505 systemd-timesyncd[1350]: Contacted time server 216.66.48.42:123 (0.flatcar.pool.ntp.org).
May 17 00:20:34.444595 systemd-timesyncd[1350]: Initial clock synchronization to Sat 2025-05-17 00:20:34.704671 UTC.
May 17 00:20:34.489109 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 17 00:20:34.489197 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 17 00:20:34.489125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:34.503034 kernel: Console: switching to colour dummy device 80x25
May 17 00:20:34.503114 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 17 00:20:34.503137 kernel: [drm] features: -context_init
May 17 00:20:34.504548 kernel: [drm] number of scanouts: 1
May 17 00:20:34.504629 kernel: [drm] number of cap sets: 0
May 17 00:20:34.505604 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 17 00:20:34.515802 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 17 00:20:34.516006 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:20:34.533575 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 17 00:20:34.545457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:34.545727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:34.555019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:34.658576 kernel: EDAC MC: Ver: 3.0.0
May 17 00:20:34.682822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:34.687135 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:20:34.692978 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:20:34.716845 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
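The per-interface files referenced above (/run/systemd/network/10-da:24:5c:66:2c:75.network and 10-3a:60:74:17:7c:2c.network) are runtime units generated for each NIC's MAC address; their exact contents are not captured in the log. An illustrative sketch of the shape such a file takes (the [Network] section here is a guess; on this platform the generator derives addressing from the kernel command line and metadata, not necessarily plain DHCP):

```ini
; /run/systemd/network/10-da:24:5c:66:2c:75.network -- illustrative only
[Match]
MACAddress=da:24:5c:66:2c:75

[Network]
DHCP=yes
```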
May 17 00:20:34.746487 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:20:34.748103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:20:34.750122 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:20:34.750682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:20:34.751024 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:20:34.751670 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:20:34.751912 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:20:34.752004 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:20:34.752086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:20:34.752121 systemd[1]: Reached target paths.target - Path Units.
May 17 00:20:34.752175 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:20:34.754127 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:20:34.757788 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:20:34.765909 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:20:34.769156 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:20:34.773044 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:20:34.775892 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:20:34.776514 systemd[1]: Reached target basic.target - Basic System.
May 17 00:20:34.777217 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:20:34.777267 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:20:34.782087 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:20:34.784830 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:20:34.794895 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:20:34.801797 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:20:34.816774 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:20:34.821997 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:20:34.823704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:20:34.835579 jq[1436]: false
May 17 00:20:34.832534 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:20:34.841784 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:20:34.847885 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:20:34.858873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:20:34.872851 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:20:34.875348 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:20:34.876456 dbus-daemon[1435]: [system] SELinux support is enabled
May 17 00:20:34.879646 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:20:34.885784 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:20:34.895754 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:20:34.899838 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:20:34.909657 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:20:34.919470 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:20:34.919975 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:20:34.927880 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:20:34.928643 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:20:34.928929 coreos-metadata[1434]: May 17 00:20:34.928 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:20:34.939731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:20:34.939786 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:20:34.942126 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:20:34.947349 extend-filesystems[1437]: Found loop4
May 17 00:20:34.947349 extend-filesystems[1437]: Found loop5
May 17 00:20:34.947349 extend-filesystems[1437]: Found loop6
May 17 00:20:34.947349 extend-filesystems[1437]: Found loop7
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda1
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda2
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda3
May 17 00:20:34.947349 extend-filesystems[1437]: Found usr
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda4
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda6
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda7
May 17 00:20:34.947349 extend-filesystems[1437]: Found vda9
May 17 00:20:34.947349 extend-filesystems[1437]: Checking size of /dev/vda9
May 17 00:20:34.942237 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 17 00:20:35.020031 update_engine[1447]: I20250517 00:20:34.981333 1447 main.cc:92] Flatcar Update Engine starting
May 17 00:20:35.020031 update_engine[1447]: I20250517 00:20:34.997089 1447 update_check_scheduler.cc:74] Next update check in 2m36s
May 17 00:20:35.020404 coreos-metadata[1434]: May 17 00:20:34.960 INFO Fetch successful
May 17 00:20:34.942257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:20:34.979096 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:20:35.032103 extend-filesystems[1437]: Resized partition /dev/vda9 May 17 00:20:34.997655 systemd[1]: Started update-engine.service - Update Engine. May 17 00:20:35.035400 jq[1448]: true May 17 00:20:35.012231 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:20:35.012473 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:20:35.030870 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:20:35.047001 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024) May 17 00:20:35.052445 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 17 00:20:35.059427 tar[1455]: linux-amd64/LICENSE May 17 00:20:35.059427 tar[1455]: linux-amd64/helm May 17 00:20:35.072191 jq[1468]: true May 17 00:20:35.153239 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:20:35.157954 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:20:35.180001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376) May 17 00:20:35.273894 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 17 00:20:35.292115 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:20:35.292115 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 8 May 17 00:20:35.292115 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 17 00:20:35.312076 extend-filesystems[1437]: Resized filesystem in /dev/vda9 May 17 00:20:35.312076 extend-filesystems[1437]: Found vdb May 17 00:20:35.295211 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:20:35.295481 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:20:35.370092 bash[1497]: Updated "/home/core/.ssh/authorized_keys" May 17 00:20:35.371891 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:20:35.383975 systemd[1]: Starting sshkeys.service... May 17 00:20:35.451752 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:20:35.460349 systemd-logind[1445]: New seat seat0. May 17 00:20:35.464088 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:20:35.476188 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:20:35.476221 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:20:35.476797 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:20:35.516357 systemd-networkd[1370]: eth1: Gained IPv6LL May 17 00:20:35.528689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:20:35.530538 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:20:35.547017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:35.560270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
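The resize2fs run above grows the root filesystem on-line from 553,472 to 15,121,403 blocks of 4 KiB, i.e. from about 2.1 GiB to roughly 57.7 GiB. A sketch of that arithmetic, with both block counts copied from the log:

```go
// Sketch: translate the resize2fs block counts from the log into GiB.
package main

import "fmt"

func main() {
	const blockSize = 4096 // ext4 block size, "(4k)" in the log
	const before, after = 553472, 15121403

	toGiB := func(blocks int64) float64 {
		return float64(blocks*blockSize) / (1 << 30)
	}
	fmt.Printf("before: %.2f GiB\n", toGiB(before)) // ~2.11 GiB
	fmt.Printf("after:  %.2f GiB\n", toGiB(after))  // ~57.68 GiB
}
```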
May 17 00:20:35.580289 coreos-metadata[1507]: May 17 00:20:35.579 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:20:35.583728 systemd-networkd[1370]: eth0: Gained IPv6LL May 17 00:20:35.594477 coreos-metadata[1507]: May 17 00:20:35.593 INFO Fetch successful May 17 00:20:35.620131 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:20:35.630301 unknown[1507]: wrote ssh authorized keys file for user: core May 17 00:20:35.698955 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" May 17 00:20:35.701508 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:20:35.710084 systemd[1]: Finished sshkeys.service. May 17 00:20:35.739922 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:20:35.816774 containerd[1459]: time="2025-05-17T00:20:35.815744630Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:20:35.968491 containerd[1459]: time="2025-05-17T00:20:35.968095144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.979951 containerd[1459]: time="2025-05-17T00:20:35.979879391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:35.980138 containerd[1459]: time="2025-05-17T00:20:35.980119080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:20:35.980217 containerd[1459]: time="2025-05-17T00:20:35.980202507Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:20:35.980531 containerd[1459]: time="2025-05-17T00:20:35.980498510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:20:35.981044 containerd[1459]: time="2025-05-17T00:20:35.980805669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.981044 containerd[1459]: time="2025-05-17T00:20:35.980928709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:35.981044 containerd[1459]: time="2025-05-17T00:20:35.980948973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.981482 containerd[1459]: time="2025-05-17T00:20:35.981451538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.983587754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.983643734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.983665428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.983836392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.984168175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.984398335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.984422855Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.984531326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:20:35.984686 containerd[1459]: time="2025-05-17T00:20:35.984628311Z" level=info msg="metadata content store policy set" policy=shared May 17 00:20:35.997579 containerd[1459]: time="2025-05-17T00:20:35.995648618Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:20:35.997579 containerd[1459]: time="2025-05-17T00:20:35.995759144Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:20:35.997579 containerd[1459]: time="2025-05-17T00:20:35.995785108Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:20:35.997579 containerd[1459]: time="2025-05-17T00:20:35.996892408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:20:35.997579 containerd[1459]: time="2025-05-17T00:20:35.996934472Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:20:35.997579 containerd[1459]: time="2025-05-17T00:20:35.997231370Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:20:35.999000 containerd[1459]: time="2025-05-17T00:20:35.998947931Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:20:35.999391 containerd[1459]: time="2025-05-17T00:20:35.999363179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:35.999596401Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001586179Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001620846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001643945Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001665241Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001688905Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001711447Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001752129Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001771893Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001793123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001824047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001875120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001898143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002357 containerd[1459]: time="2025-05-17T00:20:36.001919209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.001937842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.001970812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.001989731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002013891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002035509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002057665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002077651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002116018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002139306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002162571Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002218805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002237857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:20:36.002976 containerd[1459]: time="2025-05-17T00:20:36.002257105Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003490955Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003652881Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003675171Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003695346Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003711182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003731474Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003749258Z" level=info msg="NRI interface is disabled by configuration." May 17 00:20:36.004584 containerd[1459]: time="2025-05-17T00:20:36.003765608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:20:36.004981 containerd[1459]: time="2025-05-17T00:20:36.004199942Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:20:36.004981 containerd[1459]: time="2025-05-17T00:20:36.004342125Z" level=info msg="Connect containerd service" May 17 00:20:36.004981 containerd[1459]: time="2025-05-17T00:20:36.004402102Z" level=info msg="using legacy CRI server" May 17 00:20:36.004981 containerd[1459]: time="2025-05-17T00:20:36.004413783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:20:36.010774 containerd[1459]: time="2025-05-17T00:20:36.007071066Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:20:36.010774 containerd[1459]: time="2025-05-17T00:20:36.009950486Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:20:36.011343 
containerd[1459]: time="2025-05-17T00:20:36.011073752Z" level=info msg="Start subscribing containerd event" May 17 00:20:36.011343 containerd[1459]: time="2025-05-17T00:20:36.011172393Z" level=info msg="Start recovering state" May 17 00:20:36.011343 containerd[1459]: time="2025-05-17T00:20:36.011291292Z" level=info msg="Start event monitor" May 17 00:20:36.011532 containerd[1459]: time="2025-05-17T00:20:36.011322651Z" level=info msg="Start snapshots syncer" May 17 00:20:36.012589 containerd[1459]: time="2025-05-17T00:20:36.011596195Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:20:36.012589 containerd[1459]: time="2025-05-17T00:20:36.011714533Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:20:36.012589 containerd[1459]: time="2025-05-17T00:20:36.011624124Z" level=info msg="Start cni network conf syncer for default" May 17 00:20:36.012589 containerd[1459]: time="2025-05-17T00:20:36.011746207Z" level=info msg="Start streaming server" May 17 00:20:36.013066 containerd[1459]: time="2025-05-17T00:20:36.012880749Z" level=info msg="containerd successfully booted in 0.198822s" May 17 00:20:36.013099 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:20:36.559938 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:20:36.623211 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:20:36.633064 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:20:36.655351 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:20:36.655659 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:20:36.667117 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:20:36.701787 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:20:36.717646 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:20:36.733087 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:20:36.734348 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:20:36.811729 tar[1455]: linux-amd64/README.md May 17 00:20:36.841144 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:20:37.199859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:37.203241 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:20:37.206890 systemd[1]: Startup finished in 1.093s (kernel) + 6.872s (initrd) + 6.665s (userspace) = 14.631s. May 17 00:20:37.212403 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:37.931557 kubelet[1557]: E0517 00:20:37.931467 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:37.934685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:37.934853 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:37.935229 systemd[1]: kubelet.service: Consumed 1.347s CPU time. May 17 00:20:38.443567 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
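With containerd booted and serving on /run/containerd/containerd.sock (the address it logs above), it can be inspected with the containerd Go client. A minimal sketch, assuming the github.com/containerd/containerd module is available and using the k8s.io namespace where kubelet-managed images live:

```go
// Sketch: connect to the containerd socket from the log and list images.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet-managed images and containers live in the "k8s.io" namespace
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	imgs, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name())
	}
}
```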
May 17 00:20:38.454978 systemd[1]: Started sshd@0-64.23.130.50:22-139.178.68.195:44702.service - OpenSSH per-connection server daemon (139.178.68.195:44702). May 17 00:20:38.528535 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 44702 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:38.531776 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:38.546680 systemd-logind[1445]: New session 1 of user core. May 17 00:20:38.548934 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:20:38.560014 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:20:38.579714 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:20:38.593120 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:20:38.597459 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:20:38.756816 systemd[1573]: Queued start job for default target default.target. May 17 00:20:38.768160 systemd[1573]: Created slice app.slice - User Application Slice. May 17 00:20:38.768197 systemd[1573]: Reached target paths.target - Paths. May 17 00:20:38.768213 systemd[1573]: Reached target timers.target - Timers. May 17 00:20:38.770030 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:20:38.788818 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:20:38.788991 systemd[1573]: Reached target sockets.target - Sockets. May 17 00:20:38.789008 systemd[1573]: Reached target basic.target - Basic System. May 17 00:20:38.789064 systemd[1573]: Reached target default.target - Main User Target. May 17 00:20:38.789101 systemd[1573]: Startup finished in 181ms. May 17 00:20:38.789250 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:20:38.796885 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:20:38.874239 systemd[1]: Started sshd@1-64.23.130.50:22-139.178.68.195:44704.service - OpenSSH per-connection server daemon (139.178.68.195:44704). May 17 00:20:38.933198 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 44704 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:38.935504 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:38.941501 systemd-logind[1445]: New session 2 of user core. May 17 00:20:38.950942 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:20:39.018963 sshd[1584]: pam_unix(sshd:session): session closed for user core May 17 00:20:39.032694 systemd[1]: sshd@1-64.23.130.50:22-139.178.68.195:44704.service: Deactivated successfully. May 17 00:20:39.035309 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:20:39.036387 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. May 17 00:20:39.046153 systemd[1]: Started sshd@2-64.23.130.50:22-139.178.68.195:44714.service - OpenSSH per-connection server daemon (139.178.68.195:44714). May 17 00:20:39.048261 systemd-logind[1445]: Removed session 2. 
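The "SHA256:TM7Vm..." string in each accepted-publickey line above is OpenSSH's fingerprint format: the base64-encoded SHA-256 digest of the raw public key blob. A sketch computing the same fingerprint with golang.org/x/crypto/ssh, using a freshly generated ed25519 key rather than the key from this log:

```go
// Sketch: compute an OpenSSH-style SHA256 key fingerprint.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Same format sshd logs on "Accepted publickey ... SHA256:..."
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```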
May 17 00:20:39.099517 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 44714 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:39.101691 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:39.107557 systemd-logind[1445]: New session 3 of user core. May 17 00:20:39.113851 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:20:39.172336 sshd[1591]: pam_unix(sshd:session): session closed for user core May 17 00:20:39.189733 systemd[1]: sshd@2-64.23.130.50:22-139.178.68.195:44714.service: Deactivated successfully. May 17 00:20:39.192229 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:20:39.194979 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. May 17 00:20:39.205568 systemd[1]: Started sshd@3-64.23.130.50:22-139.178.68.195:44726.service - OpenSSH per-connection server daemon (139.178.68.195:44726). May 17 00:20:39.207637 systemd-logind[1445]: Removed session 3. May 17 00:20:39.248386 sshd[1598]: Accepted publickey for core from 139.178.68.195 port 44726 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:39.251843 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:39.261703 systemd-logind[1445]: New session 4 of user core. May 17 00:20:39.266925 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:20:39.341547 sshd[1598]: pam_unix(sshd:session): session closed for user core May 17 00:20:39.354034 systemd[1]: sshd@3-64.23.130.50:22-139.178.68.195:44726.service: Deactivated successfully. May 17 00:20:39.357179 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:20:39.359794 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. May 17 00:20:39.365351 systemd[1]: Started sshd@4-64.23.130.50:22-139.178.68.195:44742.service - OpenSSH per-connection server daemon (139.178.68.195:44742). May 17 00:20:39.367249 systemd-logind[1445]: Removed session 4. May 17 00:20:39.421859 sshd[1605]: Accepted publickey for core from 139.178.68.195 port 44742 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:39.423727 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:39.430457 systemd-logind[1445]: New session 5 of user core. May 17 00:20:39.436865 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:20:39.511568 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:20:39.512344 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:39.527937 sudo[1608]: pam_unix(sudo:session): session closed for user root May 17 00:20:39.532232 sshd[1605]: pam_unix(sshd:session): session closed for user core May 17 00:20:39.543061 systemd[1]: sshd@4-64.23.130.50:22-139.178.68.195:44742.service: Deactivated successfully. May 17 00:20:39.545207 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:20:39.546250 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. May 17 00:20:39.553026 systemd[1]: Started sshd@5-64.23.130.50:22-139.178.68.195:44754.service - OpenSSH per-connection server daemon (139.178.68.195:44754). May 17 00:20:39.555072 systemd-logind[1445]: Removed session 5. 
May 17 00:20:39.604232 sshd[1613]: Accepted publickey for core from 139.178.68.195 port 44754 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:39.606632 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:39.612427 systemd-logind[1445]: New session 6 of user core. May 17 00:20:39.628254 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:20:39.691928 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:20:39.692294 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:39.697435 sudo[1617]: pam_unix(sudo:session): session closed for user root May 17 00:20:39.705524 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:20:39.705898 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:39.736174 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:20:39.738402 auditctl[1620]: No rules May 17 00:20:39.738912 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:20:39.739149 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:20:39.742806 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:20:39.793404 augenrules[1638]: No rules May 17 00:20:39.795255 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:20:39.796913 sudo[1616]: pam_unix(sudo:session): session closed for user root May 17 00:20:39.800834 sshd[1613]: pam_unix(sshd:session): session closed for user core May 17 00:20:39.813156 systemd[1]: sshd@5-64.23.130.50:22-139.178.68.195:44754.service: Deactivated successfully. May 17 00:20:39.815518 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:20:39.816695 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. May 17 00:20:39.822214 systemd[1]: Started sshd@6-64.23.130.50:22-139.178.68.195:44764.service - OpenSSH per-connection server daemon (139.178.68.195:44764). May 17 00:20:39.824746 systemd-logind[1445]: Removed session 6. May 17 00:20:39.880034 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 44764 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:20:39.881480 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:39.888763 systemd-logind[1445]: New session 7 of user core. May 17 00:20:39.899945 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:20:39.966106 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:20:39.967208 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:40.472255 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:20:40.472323 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:20:41.061481 dockerd[1665]: time="2025-05-17T00:20:41.061201046Z" level=info msg="Starting up" May 17 00:20:41.213473 dockerd[1665]: time="2025-05-17T00:20:41.213383438Z" level=info msg="Loading containers: start." 
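dockerd is starting up here; once it finishes loading containers it answers on the standard /run/docker.sock socket ("API listen on /run/docker.sock" in the lines that follow). A minimal liveness check with the Docker Go SDK, assuming the github.com/docker/docker/client module:

```go
// Sketch: query the freshly started Docker daemon over its default socket.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// The daemon log below reports storage-driver=overlay2 and version=26.1.0
	fmt.Println("version:", info.ServerVersion, "driver:", info.Driver)
}
```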
May 17 00:20:41.359693 kernel: Initializing XFRM netlink socket May 17 00:20:41.458475 systemd-networkd[1370]: docker0: Link UP May 17 00:20:41.480498 dockerd[1665]: time="2025-05-17T00:20:41.480422634Z" level=info msg="Loading containers: done." May 17 00:20:41.504851 dockerd[1665]: time="2025-05-17T00:20:41.504574697Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:20:41.505075 dockerd[1665]: time="2025-05-17T00:20:41.504958865Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:20:41.505215 dockerd[1665]: time="2025-05-17T00:20:41.505132079Z" level=info msg="Daemon has completed initialization" May 17 00:20:41.506314 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck513570762-merged.mount: Deactivated successfully. May 17 00:20:41.551860 dockerd[1665]: time="2025-05-17T00:20:41.551775840Z" level=info msg="API listen on /run/docker.sock" May 17 00:20:41.552629 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:20:42.387189 containerd[1459]: time="2025-05-17T00:20:42.386796533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 17 00:20:42.945550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539336381.mount: Deactivated successfully. May 17 00:20:44.197712 containerd[1459]: time="2025-05-17T00:20:44.197619163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:44.199038 containerd[1459]: time="2025-05-17T00:20:44.198985422Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 17 00:20:44.199788 containerd[1459]: time="2025-05-17T00:20:44.199216159Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:44.202559 containerd[1459]: time="2025-05-17T00:20:44.202378532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:44.204574 containerd[1459]: time="2025-05-17T00:20:44.204062110Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.817212431s" May 17 00:20:44.204574 containerd[1459]: time="2025-05-17T00:20:44.204114781Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 17 00:20:44.205471 containerd[1459]: time="2025-05-17T00:20:44.205388671Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 17 00:20:45.778264 containerd[1459]: time="2025-05-17T00:20:45.778184515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:45.780946 
containerd[1459]: time="2025-05-17T00:20:45.780868361Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 17 00:20:45.781970 containerd[1459]: time="2025-05-17T00:20:45.781917513Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:45.785615 containerd[1459]: time="2025-05-17T00:20:45.785543287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:45.787886 containerd[1459]: time="2025-05-17T00:20:45.787825288Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.582393994s" May 17 00:20:45.787886 containerd[1459]: time="2025-05-17T00:20:45.787879211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 17 00:20:45.788455 containerd[1459]: time="2025-05-17T00:20:45.788417583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 17 00:20:47.020184 containerd[1459]: time="2025-05-17T00:20:47.020053282Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 17 00:20:47.021420 containerd[1459]: time="2025-05-17T00:20:47.021158195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:47.023982 containerd[1459]: time="2025-05-17T00:20:47.023879701Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:47.028520 containerd[1459]: time="2025-05-17T00:20:47.028447755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:47.031114 containerd[1459]: time="2025-05-17T00:20:47.029906369Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.241446933s" May 17 00:20:47.031114 containerd[1459]: time="2025-05-17T00:20:47.029978544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 17 00:20:47.031114 containerd[1459]: time="2025-05-17T00:20:47.030975869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:20:48.135768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 17 00:20:48.144712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:48.171758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864334824.mount: Deactivated successfully. May 17 00:20:48.325832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:48.328758 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:48.428351 kubelet[1893]: E0517 00:20:48.428187 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:48.436254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:48.436490 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:49.004958 containerd[1459]: time="2025-05-17T00:20:49.004079133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:49.004958 containerd[1459]: time="2025-05-17T00:20:49.004895913Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 17 00:20:49.005635 containerd[1459]: time="2025-05-17T00:20:49.005594293Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:49.008013 containerd[1459]: time="2025-05-17T00:20:49.007955094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:49.009052 containerd[1459]: time="2025-05-17T00:20:49.009000856Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.977982112s" May 17 00:20:49.009244 containerd[1459]: time="2025-05-17T00:20:49.009217497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 17 00:20:49.010351 containerd[1459]: time="2025-05-17T00:20:49.010317934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 17 00:20:49.012125 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 17 00:20:49.566200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910970808.mount: Deactivated successfully. 
May 17 00:20:50.552479 containerd[1459]: time="2025-05-17T00:20:50.551080745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:50.552479 containerd[1459]: time="2025-05-17T00:20:50.552177189Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 17 00:20:50.552479 containerd[1459]: time="2025-05-17T00:20:50.552392116Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:50.556429 containerd[1459]: time="2025-05-17T00:20:50.556361875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:50.558178 containerd[1459]: time="2025-05-17T00:20:50.558098668Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.547613211s" May 17 00:20:50.558434 containerd[1459]: time="2025-05-17T00:20:50.558403623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 17 00:20:50.559479 containerd[1459]: time="2025-05-17T00:20:50.559189210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:20:51.008180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451667425.mount: Deactivated successfully. 
May 17 00:20:51.014664 containerd[1459]: time="2025-05-17T00:20:51.014575652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:51.015826 containerd[1459]: time="2025-05-17T00:20:51.015757792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:20:51.016806 containerd[1459]: time="2025-05-17T00:20:51.016716384Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:51.020830 containerd[1459]: time="2025-05-17T00:20:51.020200611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:51.021936 containerd[1459]: time="2025-05-17T00:20:51.021730120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 462.257164ms" May 17 00:20:51.021936 containerd[1459]: time="2025-05-17T00:20:51.021793968Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:20:51.023004 containerd[1459]: time="2025-05-17T00:20:51.022965172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 17 00:20:52.092988 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 17 00:20:53.867578 containerd[1459]: time="2025-05-17T00:20:53.867309912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:53.868872 containerd[1459]: time="2025-05-17T00:20:53.868785954Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 17 00:20:53.869656 containerd[1459]: time="2025-05-17T00:20:53.869588080Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:53.875569 containerd[1459]: time="2025-05-17T00:20:53.874663788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:53.876823 containerd[1459]: time="2025-05-17T00:20:53.876748050Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.853737663s" May 17 00:20:53.876823 containerd[1459]: time="2025-05-17T00:20:53.876824976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 17 00:20:57.292128 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
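Each completed pull above logs its byte count and wall time, so effective throughput falls out directly; the etcd pull (58,938,593 bytes in ~2.85 s), for instance, works out to roughly 20 MiB/s. A sketch over two of the pulls recorded in this log:

```go
// Sketch: derive pull throughput from the byte counts and durations in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []struct {
		name  string
		bytes int64
		took  time.Duration
	}{
		// sizes and durations copied from the PullImage log lines above
		{"kube-apiserver:v1.33.1", 30072203, 1817212431 * time.Nanosecond},
		{"etcd:3.5.21-0", 58938593, 2853737663 * time.Nanosecond},
	}
	for _, p := range pulls {
		mibps := float64(p.bytes) / p.took.Seconds() / (1 << 20)
		fmt.Printf("%-24s %.1f MiB/s\n", p.name, mibps)
	}
}
```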
May 17 00:20:57.312057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:57.367626 systemd[1]: Reloading requested from client PID 1995 ('systemctl') (unit session-7.scope)... May 17 00:20:57.367656 systemd[1]: Reloading... May 17 00:20:57.539847 zram_generator::config[2037]: No configuration found. May 17 00:20:57.686415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:57.794588 systemd[1]: Reloading finished in 426 ms. May 17 00:20:57.848143 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:20:57.848259 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:20:57.848864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:57.855988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:58.054971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:58.055456 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:20:58.117450 kubelet[2087]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:58.119572 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:20:58.119572 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
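The deprecation warnings above all point at moving flags into the kubelet's config file. A sketch that emits a minimal KubeletConfiguration document of that shape; the three fields shown are illustrative rather than exhaustive, with the cgroupDriver value matching the systemd driver reported in the node config dump further down:

```go
// Sketch: emit a minimal KubeletConfiguration of the kind the deprecation
// warnings reference; fields are illustrative, not a complete config.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion   string `yaml:"apiVersion"`
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:   "kubelet.config.k8s.io/v1beta1",
		Kind:         "KubeletConfiguration",
		CgroupDriver: "systemd", // matches CgroupDriver:systemd in the dump below
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // destined for /var/lib/kubelet/config.yaml
}
```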
May 17 00:20:58.119572 kubelet[2087]: I0517 00:20:58.118133 2087 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:20:58.863261 kubelet[2087]: I0517 00:20:58.863207 2087 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:20:58.863615 kubelet[2087]: I0517 00:20:58.863595 2087 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:20:58.864112 kubelet[2087]: I0517 00:20:58.863938 2087 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:20:58.890693 kubelet[2087]: I0517 00:20:58.890650 2087 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:20:58.892725 kubelet[2087]: E0517 00:20:58.892631 2087 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.130.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:20:58.909335 kubelet[2087]: E0517 00:20:58.909274 2087 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:20:58.909623 kubelet[2087]: I0517 00:20:58.909603 2087 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:20:58.919890 kubelet[2087]: I0517 00:20:58.919840 2087 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:20:58.921874 kubelet[2087]: I0517 00:20:58.921800 2087 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:20:58.925873 kubelet[2087]: I0517 00:20:58.922081 2087 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-6deca81674","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:20:58.926373 kubelet[2087]: I0517 00:20:58.926346 2087 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:20:58.926466 kubelet[2087]: I0517 00:20:58.926456 2087 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:20:58.926755 kubelet[2087]: I0517 00:20:58.926735 2087 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:58.929636 kubelet[2087]: I0517 00:20:58.929560 2087 kubelet.go:480] "Attempting to sync node with API server" May 17 00:20:58.929841 kubelet[2087]: I0517 00:20:58.929822 2087 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:20:58.929972 kubelet[2087]: I0517 00:20:58.929960 2087 kubelet.go:386] "Adding apiserver pod source" May 17 00:20:58.930039 kubelet[2087]: I0517 00:20:58.930031 2087 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:20:58.938129 kubelet[2087]: E0517 00:20:58.938063 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.130.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-6deca81674&limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:20:58.940408 kubelet[2087]: E0517 00:20:58.940218 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.130.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" May 17 00:20:58.940408 kubelet[2087]: I0517 00:20:58.940388 2087 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:20:58.941732 kubelet[2087]: I0517 00:20:58.941068 2087 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:20:58.944708 kubelet[2087]: W0517 00:20:58.943668 2087 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:20:58.948059 kubelet[2087]: I0517 00:20:58.948018 2087 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:20:58.948273 kubelet[2087]: I0517 00:20:58.948110 2087 server.go:1289] "Started kubelet" May 17 00:20:58.951442 kubelet[2087]: I0517 00:20:58.950943 2087 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:20:58.952234 kubelet[2087]: I0517 00:20:58.952194 2087 server.go:317] "Adding debug handlers to kubelet server" May 17 00:20:58.954828 kubelet[2087]: I0517 00:20:58.954749 2087 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:20:58.958166 kubelet[2087]: I0517 00:20:58.957708 2087 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:20:58.959419 kubelet[2087]: E0517 00:20:58.957976 2087 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.130.50:6443/api/v1/namespaces/default/events\": dial tcp 64.23.130.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-6deca81674.184028911175b081 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-6deca81674,UID:ci-4081.3.3-n-6deca81674,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-6deca81674,},FirstTimestamp:2025-05-17 00:20:58.948046977 +0000 UTC m=+0.886511178,LastTimestamp:2025-05-17 00:20:58.948046977 +0000 UTC m=+0.886511178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-6deca81674,}" May 17 00:20:58.961843 kubelet[2087]: I0517 00:20:58.961560 2087 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:20:58.971946 kubelet[2087]: I0517 00:20:58.971904 2087 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:20:58.972716 kubelet[2087]: I0517 00:20:58.972186 2087 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:20:58.973375 kubelet[2087]: E0517 00:20:58.973339 2087 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-6deca81674\" not found" May 17 00:20:58.976589 kubelet[2087]: I0517 00:20:58.976555 2087 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:20:58.977747 kubelet[2087]: I0517 00:20:58.977052 2087 reconciler.go:26] "Reconciler: start to sync state" May 17 00:20:58.980271 kubelet[2087]: E0517 00:20:58.980221 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.130.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:20:58.980431 kubelet[2087]: E0517 00:20:58.980387 2087 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-6deca81674?timeout=10s\": dial tcp 64.23.130.50:6443: connect: connection refused" interval="200ms" May 17 00:20:58.980830 kubelet[2087]: I0517 00:20:58.980785 2087 factory.go:223] Registration of the systemd container factory successfully May 17 00:20:58.981168 kubelet[2087]: I0517 00:20:58.980918 2087 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:20:58.984164 kubelet[2087]: I0517 00:20:58.983911 2087 factory.go:223] Registration of the containerd container factory successfully May 17 00:20:58.993660 kubelet[2087]: E0517 00:20:58.992711 2087 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:20:59.008887 kubelet[2087]: I0517 00:20:59.008779 2087 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:20:59.013334 kubelet[2087]: I0517 00:20:59.013277 2087 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:20:59.013334 kubelet[2087]: I0517 00:20:59.013328 2087 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:20:59.013334 kubelet[2087]: I0517 00:20:59.013362 2087 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:20:59.013334 kubelet[2087]: I0517 00:20:59.013379 2087 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:20:59.013770 kubelet[2087]: E0517 00:20:59.013501 2087 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:20:59.017436 kubelet[2087]: E0517 00:20:59.017382 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.130.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:20:59.027332 kubelet[2087]: I0517 00:20:59.027246 2087 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:20:59.027332 kubelet[2087]: I0517 00:20:59.027269 2087 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:20:59.027332 kubelet[2087]: I0517 00:20:59.027296 2087 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:59.029809 kubelet[2087]: I0517 00:20:59.029779 2087 policy_none.go:49] "None policy: Start" May 17 00:20:59.030077 kubelet[2087]: I0517 00:20:59.030016 2087 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:20:59.030235 kubelet[2087]: I0517 00:20:59.030155 2087 state_mem.go:35] "Initializing new in-memory state store" May 17 00:20:59.037632 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:20:59.053110 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
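[editor's note] The kubepods.slice and kubepods-burstable.slice units just created (kubepods-besteffort.slice follows) are the kubelet's QoS cgroup tree; the per-pod units further down, e.g. kubepods-burstable-poda71997a03ad62d69e6bf49b33be9ed1d.slice, follow systemd's rule that dashes in a slice name encode hierarchy. A minimal Go sketch of that naming convention, inferred from the unit names in this log rather than taken from kubelet source (podSliceName is a hypothetical helper):

    // qos_slice.go — sketch of the systemd slice naming visible in this log:
    // the cgroupfs path /kubepods/<qos>/pod<uid> becomes
    // kubepods-<qos>-pod<uid>.slice, with dashes inside a component
    // (such as a pod UID) escaped to underscores.
    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds the leaf slice unit for a pod UID under a QoS
    // class ("" for Guaranteed pods, which sit directly under kubepods).
    func podSliceName(qosClass, podUID string) string {
        parts := []string{"kubepods"}
        if qosClass != "" {
            parts = append(parts, qosClass)
        }
        // Dashes separate hierarchy levels, so dashes within a component
        // are escaped to underscores.
        parts = append(parts, "pod"+strings.ReplaceAll(podUID, "-", "_"))
        return strings.Join(parts, "-") + ".slice"
    }

    func main() {
        // UID of the kube-proxy pod seen later in this log.
        fmt.Println(podSliceName("besteffort", "e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005"))
        // Output: kubepods-besteffort-pode458e8dc_5cff_4c3a_8d2e_e5d4f39ea005.slice
    }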
May 17 00:20:59.059207 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:20:59.065238 kubelet[2087]: E0517 00:20:59.065202 2087 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:20:59.065987 kubelet[2087]: I0517 00:20:59.065829 2087 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:20:59.065987 kubelet[2087]: I0517 00:20:59.065852 2087 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:20:59.066560 kubelet[2087]: I0517 00:20:59.066214 2087 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:20:59.068513 kubelet[2087]: E0517 00:20:59.068390 2087 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:20:59.068513 kubelet[2087]: E0517 00:20:59.068452 2087 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-6deca81674\" not found" May 17 00:20:59.129577 systemd[1]: Created slice kubepods-burstable-poda71997a03ad62d69e6bf49b33be9ed1d.slice - libcontainer container kubepods-burstable-poda71997a03ad62d69e6bf49b33be9ed1d.slice. May 17 00:20:59.150451 kubelet[2087]: E0517 00:20:59.150348 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.156818 systemd[1]: Created slice kubepods-burstable-poddd952e65592dddc06531c9d586144229.slice - libcontainer container kubepods-burstable-poddd952e65592dddc06531c9d586144229.slice. May 17 00:20:59.159764 kubelet[2087]: E0517 00:20:59.159721 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.161746 systemd[1]: Created slice kubepods-burstable-pod367d7a07ea6434e787352dba60ddc86e.slice - libcontainer container kubepods-burstable-pod367d7a07ea6434e787352dba60ddc86e.slice. 
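[editor's note] The pattern in the surrounding entries is a bootstrap chicken-and-egg: every request to https://64.23.130.50:6443 fails with "connection refused" because this kubelet is itself about to start the static kube-apiserver pod it is trying to reach, so node registration, lease creation, and the informers simply retry until the socket opens. A stdlib-only sketch of that wait-and-retry shape — the 5s cadence and the waitForAPIServer name are illustrative choices, not the kubelet's actual timing or code:

    // register_retry.go — illustration of retrying until the (self-hosted)
    // apiserver endpoint starts accepting connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForAPIServer(addr string, every time.Duration) {
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable, node registration can proceed")
                return
            }
            // Mirrors the repeated "connect: connection refused" entries above.
            fmt.Printf("register failed (%v), retrying in %s\n", err, every)
            time.Sleep(every)
        }
    }

    func main() {
        // 64.23.130.50:6443 is the advertise address seen throughout this log.
        waitForAPIServer("64.23.130.50:6443", 5*time.Second)
    }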
May 17 00:20:59.164357 kubelet[2087]: E0517 00:20:59.164140 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.167587 kubelet[2087]: I0517 00:20:59.167327 2087 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.167990 kubelet[2087]: E0517 00:20:59.167865 2087 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.130.50:6443/api/v1/nodes\": dial tcp 64.23.130.50:6443: connect: connection refused" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.178503 kubelet[2087]: I0517 00:20:59.178431 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178503 kubelet[2087]: I0517 00:20:59.178498 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178739 kubelet[2087]: I0517 00:20:59.178552 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178739 kubelet[2087]: I0517 00:20:59.178573 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178739 kubelet[2087]: I0517 00:20:59.178600 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/367d7a07ea6434e787352dba60ddc86e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-6deca81674\" (UID: \"367d7a07ea6434e787352dba60ddc86e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178739 kubelet[2087]: I0517 00:20:59.178622 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178739 kubelet[2087]: I0517 00:20:59.178644 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/dd952e65592dddc06531c9d586144229-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-6deca81674\" (UID: \"dd952e65592dddc06531c9d586144229\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178921 kubelet[2087]: I0517 00:20:59.178667 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/367d7a07ea6434e787352dba60ddc86e-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-6deca81674\" (UID: \"367d7a07ea6434e787352dba60ddc86e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:20:59.178921 kubelet[2087]: I0517 00:20:59.178687 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/367d7a07ea6434e787352dba60ddc86e-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-6deca81674\" (UID: \"367d7a07ea6434e787352dba60ddc86e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:20:59.182127 kubelet[2087]: E0517 00:20:59.182045 2087 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-6deca81674?timeout=10s\": dial tcp 64.23.130.50:6443: connect: connection refused" interval="400ms" May 17 00:20:59.370434 kubelet[2087]: I0517 00:20:59.370387 2087 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.370883 kubelet[2087]: E0517 00:20:59.370839 2087 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.130.50:6443/api/v1/nodes\": dial tcp 64.23.130.50:6443: connect: connection refused" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.452090 kubelet[2087]: E0517 00:20:59.451998 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:20:59.452928 containerd[1459]: time="2025-05-17T00:20:59.452886300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-6deca81674,Uid:a71997a03ad62d69e6bf49b33be9ed1d,Namespace:kube-system,Attempt:0,}" May 17 00:20:59.460587 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
May 17 00:20:59.461814 kubelet[2087]: E0517 00:20:59.461610 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:20:59.462812 containerd[1459]: time="2025-05-17T00:20:59.462757454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-6deca81674,Uid:dd952e65592dddc06531c9d586144229,Namespace:kube-system,Attempt:0,}" May 17 00:20:59.465495 kubelet[2087]: E0517 00:20:59.465096 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:20:59.466340 containerd[1459]: time="2025-05-17T00:20:59.466031414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-6deca81674,Uid:367d7a07ea6434e787352dba60ddc86e,Namespace:kube-system,Attempt:0,}" May 17 00:20:59.583482 kubelet[2087]: E0517 00:20:59.583400 2087 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-6deca81674?timeout=10s\": dial tcp 64.23.130.50:6443: connect: connection refused" interval="800ms" May 17 00:20:59.773105 kubelet[2087]: I0517 00:20:59.772913 2087 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.774083 kubelet[2087]: E0517 00:20:59.773983 2087 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.130.50:6443/api/v1/nodes\": dial tcp 64.23.130.50:6443: connect: connection refused" node="ci-4081.3.3-n-6deca81674" May 17 00:20:59.917901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800540133.mount: Deactivated successfully. 
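[editor's note] The lease controller's retry interval doubles across these entries: interval="200ms", then "400ms", then "800ms", and "1.6s" further below. A sketch of that doubling backoff; the 7s ceiling here is an assumption for illustration only, since the log never shows where the controller stops doubling:

    // lease_backoff.go — the doubling seen in the "Failed to ensure lease
    // exists, will retry" entries.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap, not from the log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: failed to ensure lease, will retry in %s\n",
                attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }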
May 17 00:20:59.927091 containerd[1459]: time="2025-05-17T00:20:59.925846313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:59.928661 containerd[1459]: time="2025-05-17T00:20:59.928599775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:20:59.929385 containerd[1459]: time="2025-05-17T00:20:59.929328092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:59.930457 containerd[1459]: time="2025-05-17T00:20:59.930414812Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:59.931627 containerd[1459]: time="2025-05-17T00:20:59.931566078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:20:59.932348 containerd[1459]: time="2025-05-17T00:20:59.932181799Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:59.932348 containerd[1459]: time="2025-05-17T00:20:59.932274283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:20:59.935002 containerd[1459]: time="2025-05-17T00:20:59.934926393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:59.940712 containerd[1459]: time="2025-05-17T00:20:59.940020543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.040678ms" May 17 00:20:59.942780 containerd[1459]: time="2025-05-17T00:20:59.942565914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.446992ms" May 17 00:20:59.945082 containerd[1459]: time="2025-05-17T00:20:59.944711613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.849395ms" May 17 00:21:00.180046 containerd[1459]: time="2025-05-17T00:21:00.178882762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:00.183063 containerd[1459]: time="2025-05-17T00:21:00.180070540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:00.183063 containerd[1459]: time="2025-05-17T00:21:00.182629694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:00.183063 containerd[1459]: time="2025-05-17T00:21:00.182820802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:00.197731 containerd[1459]: time="2025-05-17T00:21:00.197419178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:00.198058 containerd[1459]: time="2025-05-17T00:21:00.197688335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:00.198361 containerd[1459]: time="2025-05-17T00:21:00.198035916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:00.198918 containerd[1459]: time="2025-05-17T00:21:00.198760491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:00.222004 containerd[1459]: time="2025-05-17T00:21:00.221839796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:00.222004 containerd[1459]: time="2025-05-17T00:21:00.221932281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:00.222415 containerd[1459]: time="2025-05-17T00:21:00.221983889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:00.227680 containerd[1459]: time="2025-05-17T00:21:00.225683199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:00.233938 systemd[1]: Started cri-containerd-79693633f7ff261bccad1b9414c9976c4dbdf8551b9252bbb3e36a68a02f1b44.scope - libcontainer container 79693633f7ff261bccad1b9414c9976c4dbdf8551b9252bbb3e36a68a02f1b44. May 17 00:21:00.247276 systemd[1]: Started cri-containerd-4fa51928c41a18c867889755d1a79b7ab824f293fd25330e3ff29955b8e10639.scope - libcontainer container 4fa51928c41a18c867889755d1a79b7ab824f293fd25330e3ff29955b8e10639. May 17 00:21:00.302040 systemd[1]: Started cri-containerd-0758b4bf953d6d7ca8d76f039b418bc80cab9bdb18e8961461ee8b64f83bc236.scope - libcontainer container 0758b4bf953d6d7ca8d76f039b418bc80cab9bdb18e8961461ee8b64f83bc236. 
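[editor's note] The three sandboxes above were created from the preloaded registry.k8s.io/pause:3.8 image (note the ~480ms "pulls" resolving locally), which the CRI plugin keeps in containerd's k8s.io namespace. A hedged sketch using the containerd Go client to list those images; the socket path is the stock containerd one and may differ on other hosts:

    // pause_images.go — list CRI-managed images via the containerd client.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images (including the pinned pause image) live
        // under the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        imgs, err := client.ImageService().List(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range imgs {
            fmt.Println(img.Name)
        }
    }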
May 17 00:21:00.305937 kubelet[2087]: E0517 00:21:00.305708 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.130.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:21:00.344493 kubelet[2087]: E0517 00:21:00.344239 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.130.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:21:00.384436 containerd[1459]: time="2025-05-17T00:21:00.383445593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-6deca81674,Uid:a71997a03ad62d69e6bf49b33be9ed1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fa51928c41a18c867889755d1a79b7ab824f293fd25330e3ff29955b8e10639\"" May 17 00:21:00.384692 kubelet[2087]: E0517 00:21:00.384058 2087 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.130.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-6deca81674?timeout=10s\": dial tcp 64.23.130.50:6443: connect: connection refused" interval="1.6s" May 17 00:21:00.388291 kubelet[2087]: E0517 00:21:00.388241 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:00.399929 containerd[1459]: time="2025-05-17T00:21:00.399858113Z" level=info msg="CreateContainer within sandbox \"4fa51928c41a18c867889755d1a79b7ab824f293fd25330e3ff29955b8e10639\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:21:00.407211 containerd[1459]: time="2025-05-17T00:21:00.407153420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-6deca81674,Uid:dd952e65592dddc06531c9d586144229,Namespace:kube-system,Attempt:0,} returns sandbox id \"79693633f7ff261bccad1b9414c9976c4dbdf8551b9252bbb3e36a68a02f1b44\"" May 17 00:21:00.409108 kubelet[2087]: E0517 00:21:00.409048 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:00.415427 containerd[1459]: time="2025-05-17T00:21:00.415195405Z" level=info msg="CreateContainer within sandbox \"79693633f7ff261bccad1b9414c9976c4dbdf8551b9252bbb3e36a68a02f1b44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:21:00.423832 containerd[1459]: time="2025-05-17T00:21:00.423771258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-6deca81674,Uid:367d7a07ea6434e787352dba60ddc86e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0758b4bf953d6d7ca8d76f039b418bc80cab9bdb18e8961461ee8b64f83bc236\"" May 17 00:21:00.427418 kubelet[2087]: E0517 00:21:00.427133 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:00.427624 containerd[1459]: time="2025-05-17T00:21:00.427504787Z" level=info msg="CreateContainer within 
sandbox \"4fa51928c41a18c867889755d1a79b7ab824f293fd25330e3ff29955b8e10639\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08bd6cffdbb118267add0cce414ffdf502748dba202f3025471f524f44bc43af\"" May 17 00:21:00.428606 containerd[1459]: time="2025-05-17T00:21:00.428566250Z" level=info msg="StartContainer for \"08bd6cffdbb118267add0cce414ffdf502748dba202f3025471f524f44bc43af\"" May 17 00:21:00.435615 containerd[1459]: time="2025-05-17T00:21:00.433956559Z" level=info msg="CreateContainer within sandbox \"0758b4bf953d6d7ca8d76f039b418bc80cab9bdb18e8961461ee8b64f83bc236\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:21:00.440149 containerd[1459]: time="2025-05-17T00:21:00.440091570Z" level=info msg="CreateContainer within sandbox \"79693633f7ff261bccad1b9414c9976c4dbdf8551b9252bbb3e36a68a02f1b44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8ff6a042527295298c058efa16394f69a557fb9591370dacaeadde486fa3818e\"" May 17 00:21:00.441222 containerd[1459]: time="2025-05-17T00:21:00.441006824Z" level=info msg="StartContainer for \"8ff6a042527295298c058efa16394f69a557fb9591370dacaeadde486fa3818e\"" May 17 00:21:00.467177 containerd[1459]: time="2025-05-17T00:21:00.466997757Z" level=info msg="CreateContainer within sandbox \"0758b4bf953d6d7ca8d76f039b418bc80cab9bdb18e8961461ee8b64f83bc236\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6c5415bd183d59ad9b594d44788a58118d1e15f660192ad90cc8287a28fd056\"" May 17 00:21:00.468907 containerd[1459]: time="2025-05-17T00:21:00.468315043Z" level=info msg="StartContainer for \"f6c5415bd183d59ad9b594d44788a58118d1e15f660192ad90cc8287a28fd056\"" May 17 00:21:00.485307 kubelet[2087]: E0517 00:21:00.485122 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.130.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:21:00.496886 systemd[1]: Started cri-containerd-08bd6cffdbb118267add0cce414ffdf502748dba202f3025471f524f44bc43af.scope - libcontainer container 08bd6cffdbb118267add0cce414ffdf502748dba202f3025471f524f44bc43af. May 17 00:21:00.505560 kubelet[2087]: E0517 00:21:00.505210 2087 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.130.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-6deca81674&limit=500&resourceVersion=0\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:21:00.514657 systemd[1]: Started cri-containerd-8ff6a042527295298c058efa16394f69a557fb9591370dacaeadde486fa3818e.scope - libcontainer container 8ff6a042527295298c058efa16394f69a557fb9591370dacaeadde486fa3818e. May 17 00:21:00.561818 systemd[1]: Started cri-containerd-f6c5415bd183d59ad9b594d44788a58118d1e15f660192ad90cc8287a28fd056.scope - libcontainer container f6c5415bd183d59ad9b594d44788a58118d1e15f660192ad90cc8287a28fd056. 
May 17 00:21:00.576228 kubelet[2087]: I0517 00:21:00.576130 2087 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-6deca81674" May 17 00:21:00.578567 kubelet[2087]: E0517 00:21:00.576628 2087 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.130.50:6443/api/v1/nodes\": dial tcp 64.23.130.50:6443: connect: connection refused" node="ci-4081.3.3-n-6deca81674" May 17 00:21:00.589663 containerd[1459]: time="2025-05-17T00:21:00.589294467Z" level=info msg="StartContainer for \"08bd6cffdbb118267add0cce414ffdf502748dba202f3025471f524f44bc43af\" returns successfully" May 17 00:21:00.648145 containerd[1459]: time="2025-05-17T00:21:00.647965489Z" level=info msg="StartContainer for \"8ff6a042527295298c058efa16394f69a557fb9591370dacaeadde486fa3818e\" returns successfully" May 17 00:21:00.681788 containerd[1459]: time="2025-05-17T00:21:00.681721203Z" level=info msg="StartContainer for \"f6c5415bd183d59ad9b594d44788a58118d1e15f660192ad90cc8287a28fd056\" returns successfully" May 17 00:21:00.965884 kubelet[2087]: E0517 00:21:00.965805 2087 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.130.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.130.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:21:01.041407 kubelet[2087]: E0517 00:21:01.041034 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:21:01.041407 kubelet[2087]: E0517 00:21:01.041199 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:01.043787 kubelet[2087]: E0517 00:21:01.042660 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:21:01.044792 kubelet[2087]: E0517 00:21:01.044760 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:01.048226 kubelet[2087]: E0517 00:21:01.047464 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:21:01.048226 kubelet[2087]: E0517 00:21:01.047688 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:02.053075 kubelet[2087]: E0517 00:21:02.053018 2087 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:21:02.058397 kubelet[2087]: E0517 00:21:02.053181 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:02.058397 kubelet[2087]: E0517 00:21:02.056796 2087 kubelet.go:3305] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:21:02.059226 kubelet[2087]: E0517 00:21:02.058857 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:02.179632 kubelet[2087]: I0517 00:21:02.178249 2087 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-6deca81674" May 17 00:21:03.329283 kubelet[2087]: E0517 00:21:03.329195 2087 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-6deca81674\" not found" node="ci-4081.3.3-n-6deca81674" May 17 00:21:03.481563 kubelet[2087]: I0517 00:21:03.481311 2087 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-6deca81674" May 17 00:21:03.527054 kubelet[2087]: E0517 00:21:03.526903 2087 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-n-6deca81674.184028911175b081 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-6deca81674,UID:ci-4081.3.3-n-6deca81674,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-6deca81674,},FirstTimestamp:2025-05-17 00:20:58.948046977 +0000 UTC m=+0.886511178,LastTimestamp:2025-05-17 00:20:58.948046977 +0000 UTC m=+0.886511178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-6deca81674,}" May 17 00:21:03.577673 kubelet[2087]: I0517 00:21:03.577424 2087 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:03.597948 kubelet[2087]: E0517 00:21:03.597564 2087 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:03.597948 kubelet[2087]: I0517 00:21:03.597613 2087 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" May 17 00:21:03.611925 kubelet[2087]: E0517 00:21:03.611644 2087 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-6deca81674\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" May 17 00:21:03.611925 kubelet[2087]: I0517 00:21:03.611704 2087 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:03.619002 kubelet[2087]: E0517 00:21:03.618935 2087 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-6deca81674\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:03.941721 kubelet[2087]: I0517 00:21:03.941633 2087 apiserver.go:52] "Watching apiserver" May 17 00:21:03.978069 kubelet[2087]: I0517 00:21:03.977991 2087 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:04.921106 kubelet[2087]: I0517 
00:21:04.920849 2087 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:04.931924 kubelet[2087]: I0517 00:21:04.931846 2087 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:04.932313 kubelet[2087]: E0517 00:21:04.932281 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:05.063283 kubelet[2087]: E0517 00:21:05.063230 2087 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:06.032189 systemd[1]: Reloading requested from client PID 2368 ('systemctl') (unit session-7.scope)... May 17 00:21:06.032224 systemd[1]: Reloading... May 17 00:21:06.185733 zram_generator::config[2413]: No configuration found. May 17 00:21:06.389701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:06.493941 systemd[1]: Reloading finished in 461 ms. May 17 00:21:06.552585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:06.573280 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:21:06.573681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:06.573784 systemd[1]: kubelet.service: Consumed 1.468s CPU time, 126.8M memory peak, 0B memory swap peak. May 17 00:21:06.578990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:06.742136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:06.759068 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:06.860358 kubelet[2458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:06.860358 kubelet[2458]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:21:06.860358 kubelet[2458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
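[editor's note] The restarted kubelet (PID 2458, brought up by the systemd reload above) warns that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir are deprecated flag spellings; the first and last correspond to KubeletConfiguration fields in the file that --config points at, while the sandbox image moves to the runtime's own configuration per the warning text. A sketch of such a config file being parsed, with a deliberately tiny struct rather than the full v1beta1 API type:

    // kubelet_conf.go — minimal parse of a KubeletConfiguration fragment.
    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // Only the two fields relevant to the deprecation warnings are modelled.
    type KubeletConfiguration struct {
        Kind                     string `yaml:"kind"`
        APIVersion               string `yaml:"apiVersion"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
        VolumePluginDir          string `yaml:"volumePluginDir"`
    }

    func main() {
        doc := "kind: KubeletConfiguration\n" +
            "apiVersion: kubelet.config.k8s.io/v1beta1\n" +
            "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock\n" +
            // The Flexvolume path recreated earlier in this log.
            "volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/\n"

        var cfg KubeletConfiguration
        if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }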
May 17 00:21:06.860920 kubelet[2458]: I0517 00:21:06.860420 2458 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:06.873702 kubelet[2458]: I0517 00:21:06.873644 2458 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:21:06.873702 kubelet[2458]: I0517 00:21:06.873688 2458 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:06.874095 kubelet[2458]: I0517 00:21:06.874034 2458 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:21:06.877723 kubelet[2458]: I0517 00:21:06.877653 2458 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:21:06.886862 kubelet[2458]: I0517 00:21:06.886453 2458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:06.890570 kubelet[2458]: E0517 00:21:06.890252 2458 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:06.890570 kubelet[2458]: I0517 00:21:06.890326 2458 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:06.897271 kubelet[2458]: I0517 00:21:06.897120 2458 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:21:06.900051 kubelet[2458]: I0517 00:21:06.899854 2458 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:06.900416 kubelet[2458]: I0517 00:21:06.899916 2458 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-6deca81674","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:06.901232 kubelet[2458]: I0517 00:21:06.900785 2458 topology_manager.go:138] 
"Creating topology manager with none policy" May 17 00:21:06.901232 kubelet[2458]: I0517 00:21:06.900812 2458 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:21:06.901232 kubelet[2458]: I0517 00:21:06.900886 2458 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:06.901691 kubelet[2458]: I0517 00:21:06.901668 2458 kubelet.go:480] "Attempting to sync node with API server" May 17 00:21:06.901804 kubelet[2458]: I0517 00:21:06.901793 2458 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:06.902640 kubelet[2458]: I0517 00:21:06.902618 2458 kubelet.go:386] "Adding apiserver pod source" May 17 00:21:06.902802 kubelet[2458]: I0517 00:21:06.902786 2458 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:06.910653 kubelet[2458]: I0517 00:21:06.908441 2458 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:06.910653 kubelet[2458]: I0517 00:21:06.909308 2458 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:21:06.915599 kubelet[2458]: I0517 00:21:06.915501 2458 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:06.915740 kubelet[2458]: I0517 00:21:06.915633 2458 server.go:1289] "Started kubelet" May 17 00:21:06.920555 kubelet[2458]: I0517 00:21:06.920317 2458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:06.930005 kubelet[2458]: I0517 00:21:06.929931 2458 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:06.934809 kubelet[2458]: I0517 00:21:06.934759 2458 server.go:317] "Adding debug handlers to kubelet server" May 17 00:21:06.946100 kubelet[2458]: I0517 00:21:06.946002 2458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:06.947617 kubelet[2458]: I0517 00:21:06.946512 2458 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:06.952784 kubelet[2458]: I0517 00:21:06.952736 2458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:06.953219 kubelet[2458]: I0517 00:21:06.953188 2458 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:21:06.956125 kubelet[2458]: I0517 00:21:06.956084 2458 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:06.956440 kubelet[2458]: E0517 00:21:06.956409 2458 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-6deca81674\" not found" May 17 00:21:06.959575 kubelet[2458]: I0517 00:21:06.959179 2458 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:06.959575 kubelet[2458]: I0517 00:21:06.959423 2458 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:06.970203 kubelet[2458]: I0517 00:21:06.970167 2458 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" May 17 00:21:06.970406 kubelet[2458]: I0517 00:21:06.970395 2458 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:21:06.970489 kubelet[2458]: I0517 00:21:06.970472 2458 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:21:06.970978 kubelet[2458]: I0517 00:21:06.970606 2458 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:21:06.970978 kubelet[2458]: E0517 00:21:06.970669 2458 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:06.975001 kubelet[2458]: I0517 00:21:06.974939 2458 factory.go:223] Registration of the systemd container factory successfully May 17 00:21:06.976267 kubelet[2458]: I0517 00:21:06.976216 2458 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:06.980389 kubelet[2458]: I0517 00:21:06.980326 2458 factory.go:223] Registration of the containerd container factory successfully May 17 00:21:06.982698 kubelet[2458]: E0517 00:21:06.982225 2458 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:07.072074 kubelet[2458]: E0517 00:21:07.070838 2458 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.083955 2458 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.083985 2458 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.084022 2458 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.084236 2458 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.084252 2458 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.084276 2458 policy_none.go:49] "None policy: Start" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.084291 2458 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:07.084362 kubelet[2458]: I0517 00:21:07.084304 2458 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:07.084721 kubelet[2458]: I0517 00:21:07.084448 2458 state_mem.go:75] "Updated machine memory state" May 17 00:21:07.094179 kubelet[2458]: E0517 00:21:07.094081 2458 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:21:07.095447 kubelet[2458]: I0517 00:21:07.095366 2458 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:07.097061 kubelet[2458]: I0517 00:21:07.096633 2458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:07.097447 kubelet[2458]: I0517 00:21:07.097414 2458 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:07.104888 kubelet[2458]: E0517 00:21:07.104857 2458 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:21:07.210257 kubelet[2458]: I0517 00:21:07.210198 2458 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-6deca81674" May 17 00:21:07.226990 kubelet[2458]: I0517 00:21:07.226403 2458 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.3-n-6deca81674" May 17 00:21:07.226990 kubelet[2458]: I0517 00:21:07.226557 2458 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-6deca81674" May 17 00:21:07.273681 kubelet[2458]: I0517 00:21:07.272771 2458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:07.275574 kubelet[2458]: I0517 00:21:07.274492 2458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" May 17 00:21:07.276532 kubelet[2458]: I0517 00:21:07.276338 2458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:07.284970 kubelet[2458]: I0517 00:21:07.284835 2458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:07.285491 kubelet[2458]: I0517 00:21:07.285387 2458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:07.287534 kubelet[2458]: I0517 00:21:07.287466 2458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:07.287931 kubelet[2458]: E0517 00:21:07.287685 2458 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-6deca81674\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363363 kubelet[2458]: I0517 00:21:07.362919 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/367d7a07ea6434e787352dba60ddc86e-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-6deca81674\" (UID: \"367d7a07ea6434e787352dba60ddc86e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363363 kubelet[2458]: I0517 00:21:07.362974 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363363 kubelet[2458]: I0517 00:21:07.363005 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363363 kubelet[2458]: I0517 00:21:07.363027 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-k8s-certs\") pod 
\"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363363 kubelet[2458]: I0517 00:21:07.363047 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363880 kubelet[2458]: I0517 00:21:07.363063 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd952e65592dddc06531c9d586144229-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-6deca81674\" (UID: \"dd952e65592dddc06531c9d586144229\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363880 kubelet[2458]: I0517 00:21:07.363080 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/367d7a07ea6434e787352dba60ddc86e-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-6deca81674\" (UID: \"367d7a07ea6434e787352dba60ddc86e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363880 kubelet[2458]: I0517 00:21:07.363095 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/367d7a07ea6434e787352dba60ddc86e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-6deca81674\" (UID: \"367d7a07ea6434e787352dba60ddc86e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" May 17 00:21:07.363880 kubelet[2458]: I0517 00:21:07.363131 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a71997a03ad62d69e6bf49b33be9ed1d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-6deca81674\" (UID: \"a71997a03ad62d69e6bf49b33be9ed1d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" May 17 00:21:07.586041 kubelet[2458]: E0517 00:21:07.585643 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:07.586041 kubelet[2458]: E0517 00:21:07.585918 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:07.591167 kubelet[2458]: E0517 00:21:07.591115 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:07.918924 kubelet[2458]: I0517 00:21:07.918619 2458 apiserver.go:52] "Watching apiserver" May 17 00:21:07.959734 kubelet[2458]: I0517 00:21:07.959573 2458 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:08.046898 kubelet[2458]: I0517 00:21:08.045766 2458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" May 17 00:21:08.046898 kubelet[2458]: E0517 
00:21:08.046043 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:08.047322 kubelet[2458]: E0517 00:21:08.047298 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:08.057693 kubelet[2458]: I0517 00:21:08.057123 2458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 17 00:21:08.057693 kubelet[2458]: E0517 00:21:08.057212 2458 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-6deca81674\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674"
May 17 00:21:08.057693 kubelet[2458]: E0517 00:21:08.057459 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:08.119503 kubelet[2458]: I0517 00:21:08.118911 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-6deca81674" podStartSLOduration=4.118888161 podStartE2EDuration="4.118888161s" podCreationTimestamp="2025-05-17 00:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:08.102384786 +0000 UTC m=+1.334667571" watchObservedRunningTime="2025-05-17 00:21:08.118888161 +0000 UTC m=+1.351170941"
May 17 00:21:08.137548 kubelet[2458]: I0517 00:21:08.137386 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-6deca81674" podStartSLOduration=1.137357373 podStartE2EDuration="1.137357373s" podCreationTimestamp="2025-05-17 00:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:08.119364333 +0000 UTC m=+1.351647120" watchObservedRunningTime="2025-05-17 00:21:08.137357373 +0000 UTC m=+1.369640159"
May 17 00:21:08.139871 kubelet[2458]: I0517 00:21:08.139499 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-6deca81674" podStartSLOduration=1.139383828 podStartE2EDuration="1.139383828s" podCreationTimestamp="2025-05-17 00:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:08.139297372 +0000 UTC m=+1.371580157" watchObservedRunningTime="2025-05-17 00:21:08.139383828 +0000 UTC m=+1.371666614"
May 17 00:21:09.048341 kubelet[2458]: E0517 00:21:09.048168 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:09.048341 kubelet[2458]: E0517 00:21:09.048189 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:10.060918 kubelet[2458]: E0517 00:21:10.060811 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:10.851562 kubelet[2458]: I0517 00:21:10.851475 2458 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:21:10.852876 containerd[1459]: time="2025-05-17T00:21:10.852682120Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:21:10.853473 kubelet[2458]: I0517 00:21:10.852994 2458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:21:11.052641 kubelet[2458]: E0517 00:21:11.052291 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:11.285042 kubelet[2458]: E0517 00:21:11.284980 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:11.448820 kubelet[2458]: E0517 00:21:11.448731 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:11.617012 systemd[1]: Created slice kubepods-besteffort-pode458e8dc_5cff_4c3a_8d2e_e5d4f39ea005.slice - libcontainer container kubepods-besteffort-pode458e8dc_5cff_4c3a_8d2e_e5d4f39ea005.slice.
May 17 00:21:11.692438 kubelet[2458]: I0517 00:21:11.692301 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005-xtables-lock\") pod \"kube-proxy-zrl5r\" (UID: \"e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005\") " pod="kube-system/kube-proxy-zrl5r"
May 17 00:21:11.692438 kubelet[2458]: I0517 00:21:11.692435 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkm8\" (UniqueName: \"kubernetes.io/projected/e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005-kube-api-access-btkm8\") pod \"kube-proxy-zrl5r\" (UID: \"e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005\") " pod="kube-system/kube-proxy-zrl5r"
May 17 00:21:11.692438 kubelet[2458]: I0517 00:21:11.692460 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005-kube-proxy\") pod \"kube-proxy-zrl5r\" (UID: \"e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005\") " pod="kube-system/kube-proxy-zrl5r"
May 17 00:21:11.692824 kubelet[2458]: I0517 00:21:11.692477 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005-lib-modules\") pod \"kube-proxy-zrl5r\" (UID: \"e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005\") " pod="kube-system/kube-proxy-zrl5r"
May 17 00:21:11.927631 kubelet[2458]: E0517 00:21:11.927533 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:11.928331 containerd[1459]: time="2025-05-17T00:21:11.928253195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zrl5r,Uid:e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005,Namespace:kube-system,Attempt:0,}"
May 17 00:21:11.984691 containerd[1459]: time="2025-05-17T00:21:11.983988922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:21:11.985069 containerd[1459]: time="2025-05-17T00:21:11.984712891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:21:11.985069 containerd[1459]: time="2025-05-17T00:21:11.984745135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:21:11.985069 containerd[1459]: time="2025-05-17T00:21:11.984943271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:21:12.032084 systemd[1]: run-containerd-runc-k8s.io-d55300ca6f1d4c6559c9a8bed3998505c68996e5b9b16c3bf9f562f7a4d1b1f2-runc.JWKGx5.mount: Deactivated successfully.
May 17 00:21:12.043893 systemd[1]: Started cri-containerd-d55300ca6f1d4c6559c9a8bed3998505c68996e5b9b16c3bf9f562f7a4d1b1f2.scope - libcontainer container d55300ca6f1d4c6559c9a8bed3998505c68996e5b9b16c3bf9f562f7a4d1b1f2.
May 17 00:21:12.055564 kubelet[2458]: E0517 00:21:12.054949 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:12.056298 kubelet[2458]: E0517 00:21:12.055969 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:12.113235 containerd[1459]: time="2025-05-17T00:21:12.113172041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zrl5r,Uid:e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005,Namespace:kube-system,Attempt:0,} returns sandbox id \"d55300ca6f1d4c6559c9a8bed3998505c68996e5b9b16c3bf9f562f7a4d1b1f2\""
May 17 00:21:12.114806 kubelet[2458]: E0517 00:21:12.114750 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:12.128216 containerd[1459]: time="2025-05-17T00:21:12.127675332Z" level=info msg="CreateContainer within sandbox \"d55300ca6f1d4c6559c9a8bed3998505c68996e5b9b16c3bf9f562f7a4d1b1f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:21:12.149907 containerd[1459]: time="2025-05-17T00:21:12.149826460Z" level=info msg="CreateContainer within sandbox \"d55300ca6f1d4c6559c9a8bed3998505c68996e5b9b16c3bf9f562f7a4d1b1f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ed135ff19d8d291f3dd6d4a045bd3d92ff7a03a9a225fae34f64d01e7d62cdc\""
May 17 00:21:12.154435 containerd[1459]: time="2025-05-17T00:21:12.151867949Z" level=info msg="StartContainer for \"5ed135ff19d8d291f3dd6d4a045bd3d92ff7a03a9a225fae34f64d01e7d62cdc\""
May 17 00:21:12.208816 systemd[1]: Started cri-containerd-5ed135ff19d8d291f3dd6d4a045bd3d92ff7a03a9a225fae34f64d01e7d62cdc.scope - libcontainer container 5ed135ff19d8d291f3dd6d4a045bd3d92ff7a03a9a225fae34f64d01e7d62cdc.
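The repeated dns.go:153 records above are the kubelet's resolv.conf guard: the glibc resolver honors at most three nameserver entries, so when the node's /etc/resolv.conf carries more, the kubelet truncates the list it hands to pod sandboxes and logs the applied line (here "67.207.67.3 67.207.67.2 67.207.67.3", with the first resolver evidently listed twice). A minimal, self-contained Go sketch of that truncation rule; the constant and function names are illustrative rather than the kubelet's own, and the fourth resolv.conf entry is hypothetical, added to trigger the warning.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of 3; the name is
// illustrative, not the kubelet's own constant.
const maxNameservers = 3

// applyNameserverLimit collects nameserver entries from resolv.conf
// content and keeps at most maxNameservers of them, reporting whether
// any were dropped.
func applyNameserverLimit(resolvConf string) (applied []string, truncated bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			applied = append(applied, fields[1])
		}
	}
	if len(applied) > maxNameservers {
		return applied[:maxNameservers], true
	}
	return applied, false
}

func main() {
	// The first three addresses are taken from the log; 192.0.2.1 is a
	// hypothetical extra entry that pushes the list over the limit.
	conf := "nameserver 67.207.67.3\n" +
		"nameserver 67.207.67.2\n" +
		"nameserver 67.207.67.3\n" +
		"nameserver 192.0.2.1\n"
	applied, truncated := applyNameserverLimit(conf)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}

Running it reproduces the shape of the logged message once a fourth entry exceeds the limit, which is why the warning repeats on every sandbox the kubelet sets up on this node.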
May 17 00:21:12.250272 systemd[1]: Created slice kubepods-besteffort-pod674ae83b_884b_4cec_a7a9_3049d3dc3231.slice - libcontainer container kubepods-besteffort-pod674ae83b_884b_4cec_a7a9_3049d3dc3231.slice.
May 17 00:21:12.282336 containerd[1459]: time="2025-05-17T00:21:12.281986715Z" level=info msg="StartContainer for \"5ed135ff19d8d291f3dd6d4a045bd3d92ff7a03a9a225fae34f64d01e7d62cdc\" returns successfully"
May 17 00:21:12.299396 kubelet[2458]: I0517 00:21:12.299182 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/674ae83b-884b-4cec-a7a9-3049d3dc3231-var-lib-calico\") pod \"tigera-operator-844669ff44-fqxnb\" (UID: \"674ae83b-884b-4cec-a7a9-3049d3dc3231\") " pod="tigera-operator/tigera-operator-844669ff44-fqxnb"
May 17 00:21:12.299396 kubelet[2458]: I0517 00:21:12.299266 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hs4k\" (UniqueName: \"kubernetes.io/projected/674ae83b-884b-4cec-a7a9-3049d3dc3231-kube-api-access-2hs4k\") pod \"tigera-operator-844669ff44-fqxnb\" (UID: \"674ae83b-884b-4cec-a7a9-3049d3dc3231\") " pod="tigera-operator/tigera-operator-844669ff44-fqxnb"
May 17 00:21:12.556278 containerd[1459]: time="2025-05-17T00:21:12.556110897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-fqxnb,Uid:674ae83b-884b-4cec-a7a9-3049d3dc3231,Namespace:tigera-operator,Attempt:0,}"
May 17 00:21:12.605410 containerd[1459]: time="2025-05-17T00:21:12.605213471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:21:12.605410 containerd[1459]: time="2025-05-17T00:21:12.605339453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:21:12.605410 containerd[1459]: time="2025-05-17T00:21:12.605364255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:21:12.605886 containerd[1459]: time="2025-05-17T00:21:12.605539668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:21:12.639026 systemd[1]: Started cri-containerd-74ee05e2d0521da46e0f4803bf409e4c9e98928a6a5b3748247b417ab68d94b6.scope - libcontainer container 74ee05e2d0521da46e0f4803bf409e4c9e98928a6a5b3748247b417ab68d94b6.
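The kube-proxy and tigera-operator records trace the kubelet's standard CRI sequence against containerd: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the result (the systemd cri-containerd-<id>.scope units are the per-container cgroups). A sketch of the same three calls made directly with the CRI v1 client, assuming containerd's default socket path; the kube-proxy image reference is a placeholder, since the log never prints it.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint on Flatcar.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1. RunPodSandbox, with the metadata printed in the log record.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-zrl5r",
			Uid:       "e458e8dc-5cff-4c3a-8d2e-e5d4f39ea005",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx,
		&runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside the returned sandbox id. The image
	// reference here is a placeholder, not taken from the log.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, after which containerd logs
	// "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", cc.ContainerId)
}

crictl wraps essentially these RPCs, which is why its runp, create, and start subcommands mirror the message names seen in these records.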
May 17 00:21:12.704858 containerd[1459]: time="2025-05-17T00:21:12.704030733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-fqxnb,Uid:674ae83b-884b-4cec-a7a9-3049d3dc3231,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"74ee05e2d0521da46e0f4803bf409e4c9e98928a6a5b3748247b417ab68d94b6\""
May 17 00:21:12.710787 containerd[1459]: time="2025-05-17T00:21:12.710658094Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:21:13.062099 kubelet[2458]: E0517 00:21:13.060069 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:13.062099 kubelet[2458]: E0517 00:21:13.060238 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:21:13.078117 kubelet[2458]: I0517 00:21:13.078054 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zrl5r" podStartSLOduration=2.077642317 podStartE2EDuration="2.077642317s" podCreationTimestamp="2025-05-17 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:13.077591622 +0000 UTC m=+6.309874406" watchObservedRunningTime="2025-05-17 00:21:13.077642317 +0000 UTC m=+6.309925102"
May 17 00:21:15.151257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762429062.mount: Deactivated successfully.
May 17 00:21:16.131561 containerd[1459]: time="2025-05-17T00:21:16.131450363Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:21:16.134304 containerd[1459]: time="2025-05-17T00:21:16.134209900Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 17 00:21:16.137376 containerd[1459]: time="2025-05-17T00:21:16.137270646Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:21:16.142709 containerd[1459]: time="2025-05-17T00:21:16.142621238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:21:16.144433 containerd[1459]: time="2025-05-17T00:21:16.143492642Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.432775748s"
May 17 00:21:16.144433 containerd[1459]: time="2025-05-17T00:21:16.143551861Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 17 00:21:16.151204 containerd[1459]: time="2025-05-17T00:21:16.151150536Z" level=info msg="CreateContainer within sandbox \"74ee05e2d0521da46e0f4803bf409e4c9e98928a6a5b3748247b417ab68d94b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:21:16.165098 containerd[1459]: time="2025-05-17T00:21:16.164897200Z" level=info msg="CreateContainer within sandbox \"74ee05e2d0521da46e0f4803bf409e4c9e98928a6a5b3748247b417ab68d94b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a1ddfabd73d9a243206e5ada3d34474ca6433ce0221d2de7f089ea4f839c7ae\""
May 17 00:21:16.168195 containerd[1459]: time="2025-05-17T00:21:16.166484418Z" level=info msg="StartContainer for \"9a1ddfabd73d9a243206e5ada3d34474ca6433ce0221d2de7f089ea4f839c7ae\""
May 17 00:21:16.225068 systemd[1]: Started cri-containerd-9a1ddfabd73d9a243206e5ada3d34474ca6433ce0221d2de7f089ea4f839c7ae.scope - libcontainer container 9a1ddfabd73d9a243206e5ada3d34474ca6433ce0221d2de7f089ea4f839c7ae.
May 17 00:21:16.278942 containerd[1459]: time="2025-05-17T00:21:16.278453654Z" level=info msg="StartContainer for \"9a1ddfabd73d9a243206e5ada3d34474ca6433ce0221d2de7f089ea4f839c7ae\" returns successfully"
May 17 00:21:20.581630 update_engine[1447]: I20250517 00:21:20.580610 1447 update_attempter.cc:509] Updating boot flags...
May 17 00:21:20.653440 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2825)
May 17 00:21:21.958305 sudo[1649]: pam_unix(sudo:session): session closed for user root
May 17 00:21:21.966556 sshd[1646]: pam_unix(sshd:session): session closed for user core
May 17 00:21:21.973344 systemd[1]: sshd@6-64.23.130.50:22-139.178.68.195:44764.service: Deactivated successfully.
May 17 00:21:21.985368 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:21:21.987244 systemd[1]: session-7.scope: Consumed 6.210s CPU time, 145.8M memory peak, 0B memory swap peak.
May 17 00:21:21.989970 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
May 17 00:21:21.994919 systemd-logind[1445]: Removed session 7.
May 17 00:21:26.569483 kubelet[2458]: I0517 00:21:26.567757 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-fqxnb" podStartSLOduration=11.131659425 podStartE2EDuration="14.567704463s" podCreationTimestamp="2025-05-17 00:21:12 +0000 UTC" firstStartedPulling="2025-05-17 00:21:12.708800692 +0000 UTC m=+5.941083457" lastFinishedPulling="2025-05-17 00:21:16.144845731 +0000 UTC m=+9.377128495" observedRunningTime="2025-05-17 00:21:17.085664351 +0000 UTC m=+10.317947136" watchObservedRunningTime="2025-05-17 00:21:26.567704463 +0000 UTC m=+19.799987251"
May 17 00:21:26.587483 systemd[1]: Created slice kubepods-besteffort-poda4677331_24af_49c7_bd02_e4a60aa198b9.slice - libcontainer container kubepods-besteffort-poda4677331_24af_49c7_bd02_e4a60aa198b9.slice.
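The tigera-operator startup record above reconciles arithmetically: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14.567704463s), and podStartSLOduration additionally excludes the image-pull window, lastFinishedPulling minus firstStartedPulling. A small Go check of those figures using the logged wall-clock fields; the kubelet appears to subtract the monotonic readings (the m=+ offsets, 9.377128495 - 5.941083457 = 3.436045038s), so this wall-clock reconstruction lands 1ns under the logged 11.131659425.

package main

import (
	"fmt"
	"time"
)

// layout matches the wall-clock format in the kubelet's record.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-17 00:21:12 +0000 UTC")
	firstPull := mustParse("2025-05-17 00:21:12.708800692 +0000 UTC")
	lastPull := mustParse("2025-05-17 00:21:16.144845731 +0000 UTC")
	running := mustParse("2025-05-17 00:21:26.567704463 +0000 UTC")

	e2e := running.Sub(created)     // 14.567704463s, as logged
	pull := lastPull.Sub(firstPull) // 3.436045039s on the wall clock
	slo := e2e - pull               // 11.131659424s, 1ns under the log

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}

The 3.436s pull window also agrees, to within a few milliseconds, with containerd's own "in 3.432775748s" pull report earlier in the log; the difference is simply where each component starts and stops its clock.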
May 17 00:21:26.601233 kubelet[2458]: I0517 00:21:26.601175 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4677331-24af-49c7-bd02-e4a60aa198b9-tigera-ca-bundle\") pod \"calico-typha-6b5b45f66c-m8v47\" (UID: \"a4677331-24af-49c7-bd02-e4a60aa198b9\") " pod="calico-system/calico-typha-6b5b45f66c-m8v47" May 17 00:21:26.601233 kubelet[2458]: I0517 00:21:26.601222 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4677331-24af-49c7-bd02-e4a60aa198b9-typha-certs\") pod \"calico-typha-6b5b45f66c-m8v47\" (UID: \"a4677331-24af-49c7-bd02-e4a60aa198b9\") " pod="calico-system/calico-typha-6b5b45f66c-m8v47" May 17 00:21:26.601233 kubelet[2458]: I0517 00:21:26.601245 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzdsm\" (UniqueName: \"kubernetes.io/projected/a4677331-24af-49c7-bd02-e4a60aa198b9-kube-api-access-nzdsm\") pod \"calico-typha-6b5b45f66c-m8v47\" (UID: \"a4677331-24af-49c7-bd02-e4a60aa198b9\") " pod="calico-system/calico-typha-6b5b45f66c-m8v47" May 17 00:21:26.831852 systemd[1]: Created slice kubepods-besteffort-pod7804134d_f3ff_4eb7_8c0e_951ae52af78d.slice - libcontainer container kubepods-besteffort-pod7804134d_f3ff_4eb7_8c0e_951ae52af78d.slice. May 17 00:21:26.893329 kubelet[2458]: E0517 00:21:26.893144 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:26.894572 containerd[1459]: time="2025-05-17T00:21:26.894236650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b5b45f66c-m8v47,Uid:a4677331-24af-49c7-bd02-e4a60aa198b9,Namespace:calico-system,Attempt:0,}" May 17 00:21:26.904868 kubelet[2458]: I0517 00:21:26.904812 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-var-run-calico\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.905901 kubelet[2458]: I0517 00:21:26.905091 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-cni-log-dir\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.905901 kubelet[2458]: I0517 00:21:26.905129 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7804134d-f3ff-4eb7-8c0e-951ae52af78d-node-certs\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.905901 kubelet[2458]: I0517 00:21:26.905154 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-policysync\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.905901 kubelet[2458]: I0517 00:21:26.905177 2458 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb45b\" (UniqueName: \"kubernetes.io/projected/7804134d-f3ff-4eb7-8c0e-951ae52af78d-kube-api-access-pb45b\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.905901 kubelet[2458]: I0517 00:21:26.905226 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-flexvol-driver-host\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.907402 kubelet[2458]: I0517 00:21:26.905248 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7804134d-f3ff-4eb7-8c0e-951ae52af78d-tigera-ca-bundle\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.907402 kubelet[2458]: I0517 00:21:26.905271 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-lib-modules\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.907402 kubelet[2458]: I0517 00:21:26.905300 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-cni-bin-dir\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.907402 kubelet[2458]: I0517 00:21:26.905324 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-cni-net-dir\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.907402 kubelet[2458]: I0517 00:21:26.905356 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-var-lib-calico\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.907726 kubelet[2458]: I0517 00:21:26.905382 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7804134d-f3ff-4eb7-8c0e-951ae52af78d-xtables-lock\") pod \"calico-node-4skj9\" (UID: \"7804134d-f3ff-4eb7-8c0e-951ae52af78d\") " pod="calico-system/calico-node-4skj9" May 17 00:21:26.944552 containerd[1459]: time="2025-05-17T00:21:26.942884061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:26.945325 containerd[1459]: time="2025-05-17T00:21:26.944581822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:26.945325 containerd[1459]: time="2025-05-17T00:21:26.944657288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:26.945325 containerd[1459]: time="2025-05-17T00:21:26.944869034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:27.008042 kubelet[2458]: E0517 00:21:27.007907 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:27.026807 systemd[1]: Started cri-containerd-5506e3a335e13a3a7f6cf4b32b85968aaf750762656374996dc06c9e3907838b.scope - libcontainer container 5506e3a335e13a3a7f6cf4b32b85968aaf750762656374996dc06c9e3907838b. May 17 00:21:27.078913 kubelet[2458]: E0517 00:21:27.078789 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.079969 kubelet[2458]: W0517 00:21:27.079805 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.079969 kubelet[2458]: E0517 00:21:27.079877 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.090242 kubelet[2458]: E0517 00:21:27.090120 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.092475 kubelet[2458]: W0517 00:21:27.091212 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.092475 kubelet[2458]: E0517 00:21:27.091272 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.093355 kubelet[2458]: E0517 00:21:27.092948 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.093355 kubelet[2458]: W0517 00:21:27.092978 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.093355 kubelet[2458]: E0517 00:21:27.093014 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.095053 kubelet[2458]: E0517 00:21:27.094723 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.095053 kubelet[2458]: W0517 00:21:27.094763 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.095053 kubelet[2458]: E0517 00:21:27.094789 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.096168 kubelet[2458]: E0517 00:21:27.095501 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.096168 kubelet[2458]: W0517 00:21:27.095534 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.096168 kubelet[2458]: E0517 00:21:27.095555 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.098354 kubelet[2458]: E0517 00:21:27.098326 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.098582 kubelet[2458]: W0517 00:21:27.098549 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.099308 kubelet[2458]: E0517 00:21:27.099096 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.100240 kubelet[2458]: E0517 00:21:27.100197 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.100504 kubelet[2458]: W0517 00:21:27.100486 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.100814 kubelet[2458]: E0517 00:21:27.100770 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.102718 kubelet[2458]: E0517 00:21:27.102542 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.102718 kubelet[2458]: W0517 00:21:27.102559 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.102718 kubelet[2458]: E0517 00:21:27.102578 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.104273 kubelet[2458]: E0517 00:21:27.104097 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.104273 kubelet[2458]: W0517 00:21:27.104114 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.104273 kubelet[2458]: E0517 00:21:27.104132 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.106264 kubelet[2458]: E0517 00:21:27.106084 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.106264 kubelet[2458]: W0517 00:21:27.106100 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.106264 kubelet[2458]: E0517 00:21:27.106118 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.111147 kubelet[2458]: E0517 00:21:27.108903 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.111147 kubelet[2458]: W0517 00:21:27.108927 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.111147 kubelet[2458]: E0517 00:21:27.108948 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.111671 kubelet[2458]: E0517 00:21:27.111413 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.111671 kubelet[2458]: W0517 00:21:27.111435 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.111671 kubelet[2458]: E0517 00:21:27.111461 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.112426 kubelet[2458]: E0517 00:21:27.112404 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.112868 kubelet[2458]: W0517 00:21:27.112505 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.113122 kubelet[2458]: E0517 00:21:27.113107 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.113474 kubelet[2458]: E0517 00:21:27.113462 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.113596 kubelet[2458]: W0517 00:21:27.113584 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.113861 kubelet[2458]: E0517 00:21:27.113642 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.116572 kubelet[2458]: E0517 00:21:27.116427 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.116920 kubelet[2458]: W0517 00:21:27.116890 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.117346 kubelet[2458]: E0517 00:21:27.117145 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.117826 kubelet[2458]: E0517 00:21:27.117807 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.118076 kubelet[2458]: W0517 00:21:27.117929 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.118076 kubelet[2458]: E0517 00:21:27.117980 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.120701 kubelet[2458]: E0517 00:21:27.120670 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.120912 kubelet[2458]: W0517 00:21:27.120844 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.121135 kubelet[2458]: E0517 00:21:27.121093 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.122179 kubelet[2458]: E0517 00:21:27.121695 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.122179 kubelet[2458]: W0517 00:21:27.121709 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.122179 kubelet[2458]: E0517 00:21:27.121726 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.122179 kubelet[2458]: E0517 00:21:27.122031 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.122179 kubelet[2458]: W0517 00:21:27.122041 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.122179 kubelet[2458]: E0517 00:21:27.122052 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.122778 kubelet[2458]: E0517 00:21:27.122677 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.122883 kubelet[2458]: W0517 00:21:27.122866 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.122955 kubelet[2458]: E0517 00:21:27.122943 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.123425 kubelet[2458]: E0517 00:21:27.123411 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.123648 kubelet[2458]: W0517 00:21:27.123621 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.124103 kubelet[2458]: E0517 00:21:27.124083 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.127262 kubelet[2458]: E0517 00:21:27.126613 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.127262 kubelet[2458]: W0517 00:21:27.126636 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.127262 kubelet[2458]: E0517 00:21:27.126657 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.127262 kubelet[2458]: I0517 00:21:27.126709 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3741cab5-ed2e-41cc-bf79-c5ceb8c1246a-registration-dir\") pod \"csi-node-driver-789m7\" (UID: \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\") " pod="calico-system/csi-node-driver-789m7" May 17 00:21:27.127262 kubelet[2458]: E0517 00:21:27.126988 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.127262 kubelet[2458]: W0517 00:21:27.127021 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.127262 kubelet[2458]: E0517 00:21:27.127035 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.127262 kubelet[2458]: I0517 00:21:27.127065 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3741cab5-ed2e-41cc-bf79-c5ceb8c1246a-socket-dir\") pod \"csi-node-driver-789m7\" (UID: \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\") " pod="calico-system/csi-node-driver-789m7" May 17 00:21:27.128023 kubelet[2458]: E0517 00:21:27.127871 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.128023 kubelet[2458]: W0517 00:21:27.127886 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.128023 kubelet[2458]: E0517 00:21:27.127918 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.128626 kubelet[2458]: E0517 00:21:27.128605 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.128711 kubelet[2458]: W0517 00:21:27.128699 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.130574 kubelet[2458]: E0517 00:21:27.130353 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.132387 kubelet[2458]: E0517 00:21:27.132356 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.132574 kubelet[2458]: W0517 00:21:27.132556 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.132879 kubelet[2458]: E0517 00:21:27.132628 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.132879 kubelet[2458]: I0517 00:21:27.132825 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577wp\" (UniqueName: \"kubernetes.io/projected/3741cab5-ed2e-41cc-bf79-c5ceb8c1246a-kube-api-access-577wp\") pod \"csi-node-driver-789m7\" (UID: \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\") " pod="calico-system/csi-node-driver-789m7" May 17 00:21:27.134574 kubelet[2458]: E0517 00:21:27.134543 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.134746 kubelet[2458]: W0517 00:21:27.134725 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.134885 kubelet[2458]: E0517 00:21:27.134807 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.135359 kubelet[2458]: E0517 00:21:27.135341 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.135438 kubelet[2458]: W0517 00:21:27.135428 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.135498 kubelet[2458]: E0517 00:21:27.135481 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.136163 kubelet[2458]: E0517 00:21:27.136148 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.136241 kubelet[2458]: W0517 00:21:27.136229 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.136316 kubelet[2458]: E0517 00:21:27.136303 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.136603 kubelet[2458]: I0517 00:21:27.136576 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3741cab5-ed2e-41cc-bf79-c5ceb8c1246a-kubelet-dir\") pod \"csi-node-driver-789m7\" (UID: \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\") " pod="calico-system/csi-node-driver-789m7" May 17 00:21:27.137764 containerd[1459]: time="2025-05-17T00:21:27.137730216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4skj9,Uid:7804134d-f3ff-4eb7-8c0e-951ae52af78d,Namespace:calico-system,Attempt:0,}" May 17 00:21:27.138298 kubelet[2458]: E0517 00:21:27.138278 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.138437 kubelet[2458]: W0517 00:21:27.138421 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.138576 kubelet[2458]: E0517 00:21:27.138562 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.139621 kubelet[2458]: E0517 00:21:27.139603 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.139728 kubelet[2458]: W0517 00:21:27.139713 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.139810 kubelet[2458]: E0517 00:21:27.139799 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.141246 kubelet[2458]: E0517 00:21:27.141228 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.141354 kubelet[2458]: W0517 00:21:27.141342 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.141441 kubelet[2458]: E0517 00:21:27.141427 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.142088 kubelet[2458]: I0517 00:21:27.141852 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3741cab5-ed2e-41cc-bf79-c5ceb8c1246a-varrun\") pod \"csi-node-driver-789m7\" (UID: \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\") " pod="calico-system/csi-node-driver-789m7" May 17 00:21:27.142389 kubelet[2458]: E0517 00:21:27.142372 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.142486 kubelet[2458]: W0517 00:21:27.142470 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.142603 kubelet[2458]: E0517 00:21:27.142554 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.143840 kubelet[2458]: E0517 00:21:27.143818 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.144112 kubelet[2458]: W0517 00:21:27.144090 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.144566 kubelet[2458]: E0517 00:21:27.144541 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.146244 kubelet[2458]: E0517 00:21:27.146203 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.146415 kubelet[2458]: W0517 00:21:27.146383 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.146497 kubelet[2458]: E0517 00:21:27.146485 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.147242 kubelet[2458]: E0517 00:21:27.147222 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.147499 kubelet[2458]: W0517 00:21:27.147424 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.147499 kubelet[2458]: E0517 00:21:27.147449 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.179223 containerd[1459]: time="2025-05-17T00:21:27.178905034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:27.179427 containerd[1459]: time="2025-05-17T00:21:27.179050908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:27.179427 containerd[1459]: time="2025-05-17T00:21:27.179065720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:27.179427 containerd[1459]: time="2025-05-17T00:21:27.179175195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:27.216871 systemd[1]: Started cri-containerd-1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314.scope - libcontainer container 1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314. May 17 00:21:27.244838 kubelet[2458]: E0517 00:21:27.244551 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.244838 kubelet[2458]: W0517 00:21:27.244690 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.245537 kubelet[2458]: E0517 00:21:27.244720 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.247062 kubelet[2458]: E0517 00:21:27.246871 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.247062 kubelet[2458]: W0517 00:21:27.246897 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.248129 kubelet[2458]: E0517 00:21:27.246997 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.249309 kubelet[2458]: E0517 00:21:27.249024 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.249309 kubelet[2458]: W0517 00:21:27.249051 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.249309 kubelet[2458]: E0517 00:21:27.249082 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.250874 kubelet[2458]: E0517 00:21:27.250649 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.250874 kubelet[2458]: W0517 00:21:27.250722 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.250874 kubelet[2458]: E0517 00:21:27.250748 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.254356 kubelet[2458]: E0517 00:21:27.254066 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.254356 kubelet[2458]: W0517 00:21:27.254096 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.254356 kubelet[2458]: E0517 00:21:27.254129 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.255116 kubelet[2458]: E0517 00:21:27.254803 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.255116 kubelet[2458]: W0517 00:21:27.254817 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.255116 kubelet[2458]: E0517 00:21:27.254835 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.256867 kubelet[2458]: E0517 00:21:27.256032 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.256867 kubelet[2458]: W0517 00:21:27.256052 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.256867 kubelet[2458]: E0517 00:21:27.256073 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.257726 kubelet[2458]: E0517 00:21:27.257511 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.257726 kubelet[2458]: W0517 00:21:27.257565 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.257726 kubelet[2458]: E0517 00:21:27.257583 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.258178 kubelet[2458]: E0517 00:21:27.258156 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.258404 kubelet[2458]: W0517 00:21:27.258325 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.258404 kubelet[2458]: E0517 00:21:27.258346 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.259163 kubelet[2458]: E0517 00:21:27.259147 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.259580 kubelet[2458]: W0517 00:21:27.259358 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.259580 kubelet[2458]: E0517 00:21:27.259380 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.260831 kubelet[2458]: E0517 00:21:27.260442 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.260831 kubelet[2458]: W0517 00:21:27.260456 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.260831 kubelet[2458]: E0517 00:21:27.260564 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.262697 kubelet[2458]: E0517 00:21:27.261442 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.262697 kubelet[2458]: W0517 00:21:27.261460 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.262697 kubelet[2458]: E0517 00:21:27.261473 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.264144 kubelet[2458]: E0517 00:21:27.263827 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.264144 kubelet[2458]: W0517 00:21:27.263949 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.264144 kubelet[2458]: E0517 00:21:27.263981 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.265779 kubelet[2458]: E0517 00:21:27.265355 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.265779 kubelet[2458]: W0517 00:21:27.265377 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.265779 kubelet[2458]: E0517 00:21:27.265402 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.268036 kubelet[2458]: E0517 00:21:27.267780 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.268036 kubelet[2458]: W0517 00:21:27.267809 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.268036 kubelet[2458]: E0517 00:21:27.267834 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.269078 containerd[1459]: time="2025-05-17T00:21:27.268344850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b5b45f66c-m8v47,Uid:a4677331-24af-49c7-bd02-e4a60aa198b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5506e3a335e13a3a7f6cf4b32b85968aaf750762656374996dc06c9e3907838b\"" May 17 00:21:27.270912 kubelet[2458]: E0517 00:21:27.270129 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.270912 kubelet[2458]: W0517 00:21:27.270151 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.270912 kubelet[2458]: E0517 00:21:27.270179 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.272074 kubelet[2458]: E0517 00:21:27.271718 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.272074 kubelet[2458]: W0517 00:21:27.271740 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.272074 kubelet[2458]: E0517 00:21:27.271760 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.274330 kubelet[2458]: E0517 00:21:27.274071 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:27.276324 kubelet[2458]: E0517 00:21:27.275507 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.276324 kubelet[2458]: W0517 00:21:27.275544 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.276324 kubelet[2458]: E0517 00:21:27.275572 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.278629 containerd[1459]: time="2025-05-17T00:21:27.278413890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:21:27.279671 kubelet[2458]: E0517 00:21:27.279250 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.279671 kubelet[2458]: W0517 00:21:27.279270 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.279671 kubelet[2458]: E0517 00:21:27.279461 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.283792 kubelet[2458]: E0517 00:21:27.282724 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.283792 kubelet[2458]: W0517 00:21:27.282746 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.283792 kubelet[2458]: E0517 00:21:27.282769 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.285139 kubelet[2458]: E0517 00:21:27.284852 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.285139 kubelet[2458]: W0517 00:21:27.284874 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.285139 kubelet[2458]: E0517 00:21:27.284928 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.288037 kubelet[2458]: E0517 00:21:27.287789 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.288037 kubelet[2458]: W0517 00:21:27.287815 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.288037 kubelet[2458]: E0517 00:21:27.287845 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.289261 kubelet[2458]: E0517 00:21:27.289034 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.289261 kubelet[2458]: W0517 00:21:27.289067 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.289261 kubelet[2458]: E0517 00:21:27.289088 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:27.289842 kubelet[2458]: E0517 00:21:27.289798 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.289842 kubelet[2458]: W0517 00:21:27.289815 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.290114 kubelet[2458]: E0517 00:21:27.289960 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.291005 kubelet[2458]: E0517 00:21:27.290928 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.291287 kubelet[2458]: W0517 00:21:27.291143 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.291287 kubelet[2458]: E0517 00:21:27.291212 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.310179 kubelet[2458]: E0517 00:21:27.310118 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:27.310179 kubelet[2458]: W0517 00:21:27.310146 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:27.310458 kubelet[2458]: E0517 00:21:27.310274 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:27.377382 containerd[1459]: time="2025-05-17T00:21:27.375380339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4skj9,Uid:7804134d-f3ff-4eb7-8c0e-951ae52af78d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\"" May 17 00:21:27.731285 systemd[1]: run-containerd-runc-k8s.io-5506e3a335e13a3a7f6cf4b32b85968aaf750762656374996dc06c9e3907838b-runc.jJ72bl.mount: Deactivated successfully. May 17 00:21:28.901779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310179758.mount: Deactivated successfully. 
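[Editor's note on the repeating triplet above: each probe cycle logs three related records. driver-call.go:149 shows the root cause: the kubelet tried to exec the FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and the binary does not exist yet. With no process output, driver-call.go:262 then fails to unmarshal the empty string as JSON, which in Go is exactly "unexpected end of JSON input", and plugins.go:703 skips the plugin directory. The interleaved dns.go:153 record is unrelated: the resolver honors at most three nameservers, so the kubelet warns and reports the line it actually applied. Below is a minimal sketch, not kubelet source, of how that probe pattern produces this error; the DriverStatus shape is trimmed down and the error wording is illustrative.]

// Sketch: exec a FlexVolume driver with "init" and decode its stdout.
// When the binary is missing, exec fails, stdout stays empty, and
// json.Unmarshal on empty input returns "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the shape of a FlexVolume JSON response
// (field set reduced for illustration).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(path, args...).Output() // fails: binary not installed yet
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is "unexpected end of JSON input",
		// reported together with the original exec failure, as in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}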
May 17 00:21:28.976717 kubelet[2458]: E0517 00:21:28.973051 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:29.778825 containerd[1459]: time="2025-05-17T00:21:29.778545121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:29.780494 containerd[1459]: time="2025-05-17T00:21:29.780415599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:21:29.781627 containerd[1459]: time="2025-05-17T00:21:29.781570806Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:29.785825 containerd[1459]: time="2025-05-17T00:21:29.785666502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:29.787730 containerd[1459]: time="2025-05-17T00:21:29.787307769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.508722584s" May 17 00:21:29.787730 containerd[1459]: time="2025-05-17T00:21:29.787374271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:21:29.791203 containerd[1459]: time="2025-05-17T00:21:29.789936330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:21:29.814327 containerd[1459]: time="2025-05-17T00:21:29.814216550Z" level=info msg="CreateContainer within sandbox \"5506e3a335e13a3a7f6cf4b32b85968aaf750762656374996dc06c9e3907838b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:21:29.834169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688128961.mount: Deactivated successfully. May 17 00:21:29.836127 containerd[1459]: time="2025-05-17T00:21:29.835855737Z" level=info msg="CreateContainer within sandbox \"5506e3a335e13a3a7f6cf4b32b85968aaf750762656374996dc06c9e3907838b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a1aa818541974e9266804dcf8c6ba98c77c48ca1ecb0e13622e145288bf139f7\"" May 17 00:21:29.841084 containerd[1459]: time="2025-05-17T00:21:29.840655629Z" level=info msg="StartContainer for \"a1aa818541974e9266804dcf8c6ba98c77c48ca1ecb0e13622e145288bf139f7\"" May 17 00:21:29.934934 systemd[1]: Started cri-containerd-a1aa818541974e9266804dcf8c6ba98c77c48ca1ecb0e13622e145288bf139f7.scope - libcontainer container a1aa818541974e9266804dcf8c6ba98c77c48ca1ecb0e13622e145288bf139f7. 
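[Editor's note: stripped of the FlexVolume noise, the containerd records around this point trace the standard CRI lifecycle for the typha pod: RunPodSandbox returned sandbox id 5506e3a3..., PullImage fetched ghcr.io/flatcar/calico/typha:v3.30.0 (35,158,523 bytes in ~2.51 s, roughly 14 MB/s), CreateContainer placed calico-typha inside that sandbox, and StartContainer launched it under a cri-containerd-*.scope unit. A minimal sketch of the same sequence against k8s.io/cri-api follows; the socket path and the trimmed-down configs are assumptions for illustration, not kubelet source.]

// Sketch: the four CRI calls visible in the log, issued directly
// against containerd's CRI endpoint.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Stock containerd socket path; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx := context.Background()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// 1. RunPodSandbox yields the sandbox id later calls refer to.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-typha-6b5b45f66c-m8v47",
			Namespace: "calico-system",
			Uid:       "a4677331-24af-49c7-bd02-e4a60aa198b9",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. PullImage; containerd logs the duration and bytes read as above.
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.0"},
	}); err != nil {
		log.Fatal(err)
	}

	// 3. CreateContainer within the sandbox, then 4. StartContainer.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}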
May 17 00:21:30.018219 containerd[1459]: time="2025-05-17T00:21:30.017953239Z" level=info msg="StartContainer for \"a1aa818541974e9266804dcf8c6ba98c77c48ca1ecb0e13622e145288bf139f7\" returns successfully" May 17 00:21:30.128767 kubelet[2458]: E0517 00:21:30.128592 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:30.149015 kubelet[2458]: E0517 00:21:30.148967 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.149015 kubelet[2458]: W0517 00:21:30.149001 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.149015 kubelet[2458]: E0517 00:21:30.149032 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.150081 kubelet[2458]: E0517 00:21:30.149354 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.150081 kubelet[2458]: W0517 00:21:30.149389 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.150081 kubelet[2458]: E0517 00:21:30.149406 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.150081 kubelet[2458]: E0517 00:21:30.149790 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.150081 kubelet[2458]: W0517 00:21:30.149804 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.150081 kubelet[2458]: E0517 00:21:30.149820 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.150915 kubelet[2458]: E0517 00:21:30.150869 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.150915 kubelet[2458]: W0517 00:21:30.150891 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.150915 kubelet[2458]: E0517 00:21:30.150912 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.151997 kubelet[2458]: E0517 00:21:30.151243 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.151997 kubelet[2458]: W0517 00:21:30.151257 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.151997 kubelet[2458]: E0517 00:21:30.151272 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.151997 kubelet[2458]: E0517 00:21:30.151503 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.151997 kubelet[2458]: W0517 00:21:30.151514 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.151997 kubelet[2458]: E0517 00:21:30.151549 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.151997 kubelet[2458]: E0517 00:21:30.151774 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.151997 kubelet[2458]: W0517 00:21:30.151784 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.151997 kubelet[2458]: E0517 00:21:30.151795 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.152090 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.154908 kubelet[2458]: W0517 00:21:30.152128 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.152143 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.152863 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.154908 kubelet[2458]: W0517 00:21:30.152879 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.152896 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.153162 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.154908 kubelet[2458]: W0517 00:21:30.153181 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.153193 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.154908 kubelet[2458]: E0517 00:21:30.154330 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.155214 kubelet[2458]: W0517 00:21:30.154350 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.155214 kubelet[2458]: E0517 00:21:30.154370 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.155214 kubelet[2458]: E0517 00:21:30.154711 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.155214 kubelet[2458]: W0517 00:21:30.154723 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.155214 kubelet[2458]: E0517 00:21:30.154738 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.155214 kubelet[2458]: E0517 00:21:30.154980 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.155214 kubelet[2458]: W0517 00:21:30.154991 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.155214 kubelet[2458]: E0517 00:21:30.155007 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.157673 kubelet[2458]: E0517 00:21:30.155252 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.157673 kubelet[2458]: W0517 00:21:30.155263 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.157673 kubelet[2458]: E0517 00:21:30.155276 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.157673 kubelet[2458]: E0517 00:21:30.155564 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.157673 kubelet[2458]: W0517 00:21:30.155577 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.157673 kubelet[2458]: E0517 00:21:30.155592 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.191244 kubelet[2458]: E0517 00:21:30.191103 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.191244 kubelet[2458]: W0517 00:21:30.191131 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.191244 kubelet[2458]: E0517 00:21:30.191160 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.193927 kubelet[2458]: E0517 00:21:30.193379 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.193927 kubelet[2458]: W0517 00:21:30.193408 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.193927 kubelet[2458]: E0517 00:21:30.193439 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.195063 kubelet[2458]: E0517 00:21:30.194974 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.195063 kubelet[2458]: W0517 00:21:30.195002 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.195063 kubelet[2458]: E0517 00:21:30.195029 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.197105 kubelet[2458]: E0517 00:21:30.196649 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.197105 kubelet[2458]: W0517 00:21:30.196676 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.197105 kubelet[2458]: E0517 00:21:30.196703 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.199707 kubelet[2458]: E0517 00:21:30.199107 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.199707 kubelet[2458]: W0517 00:21:30.199144 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.199707 kubelet[2458]: E0517 00:21:30.199176 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.199707 kubelet[2458]: E0517 00:21:30.199503 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.199707 kubelet[2458]: W0517 00:21:30.199538 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.199707 kubelet[2458]: E0517 00:21:30.199556 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.200152 kubelet[2458]: E0517 00:21:30.199783 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.200152 kubelet[2458]: W0517 00:21:30.199793 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.200152 kubelet[2458]: E0517 00:21:30.199806 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.200152 kubelet[2458]: E0517 00:21:30.200067 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.200152 kubelet[2458]: W0517 00:21:30.200079 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.200152 kubelet[2458]: E0517 00:21:30.200091 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.200409 kubelet[2458]: E0517 00:21:30.200371 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.200409 kubelet[2458]: W0517 00:21:30.200383 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.200409 kubelet[2458]: E0517 00:21:30.200396 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.204086 kubelet[2458]: E0517 00:21:30.204029 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.204086 kubelet[2458]: W0517 00:21:30.204065 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.204086 kubelet[2458]: E0517 00:21:30.204097 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.205573 kubelet[2458]: E0517 00:21:30.204817 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.205573 kubelet[2458]: W0517 00:21:30.204837 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.205573 kubelet[2458]: E0517 00:21:30.204865 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.205573 kubelet[2458]: E0517 00:21:30.205232 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.205573 kubelet[2458]: W0517 00:21:30.205246 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.205573 kubelet[2458]: E0517 00:21:30.205263 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.207255 kubelet[2458]: E0517 00:21:30.205606 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.207255 kubelet[2458]: W0517 00:21:30.205619 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.207255 kubelet[2458]: E0517 00:21:30.205634 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.207255 kubelet[2458]: E0517 00:21:30.205946 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.207255 kubelet[2458]: W0517 00:21:30.205958 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.207255 kubelet[2458]: E0517 00:21:30.205973 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.207255 kubelet[2458]: E0517 00:21:30.206314 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.207255 kubelet[2458]: W0517 00:21:30.206328 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.207255 kubelet[2458]: E0517 00:21:30.206343 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.209398 kubelet[2458]: E0517 00:21:30.208839 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.209398 kubelet[2458]: W0517 00:21:30.208868 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.209398 kubelet[2458]: E0517 00:21:30.208898 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.209398 kubelet[2458]: E0517 00:21:30.209310 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.209398 kubelet[2458]: W0517 00:21:30.209325 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.209398 kubelet[2458]: E0517 00:21:30.209344 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:30.211066 kubelet[2458]: E0517 00:21:30.209974 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:30.211066 kubelet[2458]: W0517 00:21:30.209992 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:30.211066 kubelet[2458]: E0517 00:21:30.210008 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:30.975009 kubelet[2458]: E0517 00:21:30.974852 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:31.131212 kubelet[2458]: I0517 00:21:31.130938 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:31.133704 kubelet[2458]: E0517 00:21:31.133200 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:31.165246 kubelet[2458]: E0517 00:21:31.164722 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.165246 kubelet[2458]: W0517 00:21:31.164766 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.165246 kubelet[2458]: E0517 00:21:31.164919 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.167634 kubelet[2458]: E0517 00:21:31.167062 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.167634 kubelet[2458]: W0517 00:21:31.167100 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.167634 kubelet[2458]: E0517 00:21:31.167148 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.169321 kubelet[2458]: E0517 00:21:31.169065 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.169321 kubelet[2458]: W0517 00:21:31.169119 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.169321 kubelet[2458]: E0517 00:21:31.169154 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.170227 kubelet[2458]: E0517 00:21:31.170092 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.170556 kubelet[2458]: W0517 00:21:31.170122 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.170556 kubelet[2458]: E0517 00:21:31.170372 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.171762 kubelet[2458]: E0517 00:21:31.171473 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.171762 kubelet[2458]: W0517 00:21:31.171642 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.171762 kubelet[2458]: E0517 00:21:31.171674 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.173145 kubelet[2458]: E0517 00:21:31.172766 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.173145 kubelet[2458]: W0517 00:21:31.172792 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.173145 kubelet[2458]: E0517 00:21:31.172827 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.174080 kubelet[2458]: E0517 00:21:31.173787 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.174080 kubelet[2458]: W0517 00:21:31.173809 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.174080 kubelet[2458]: E0517 00:21:31.173974 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.175083 kubelet[2458]: E0517 00:21:31.175022 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.175503 kubelet[2458]: W0517 00:21:31.175046 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.175503 kubelet[2458]: E0517 00:21:31.175310 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.176291 kubelet[2458]: E0517 00:21:31.176145 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.176291 kubelet[2458]: W0517 00:21:31.176185 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.176291 kubelet[2458]: E0517 00:21:31.176212 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.177552 kubelet[2458]: E0517 00:21:31.177146 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.177552 kubelet[2458]: W0517 00:21:31.177170 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.177552 kubelet[2458]: E0517 00:21:31.177194 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.178727 kubelet[2458]: E0517 00:21:31.177935 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.178727 kubelet[2458]: W0517 00:21:31.177955 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.178727 kubelet[2458]: E0517 00:21:31.177977 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.179486 kubelet[2458]: E0517 00:21:31.179335 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.179820 kubelet[2458]: W0517 00:21:31.179668 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.179820 kubelet[2458]: E0517 00:21:31.179709 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.181124 kubelet[2458]: E0517 00:21:31.180601 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.181124 kubelet[2458]: W0517 00:21:31.180704 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.181124 kubelet[2458]: E0517 00:21:31.180732 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.181938 kubelet[2458]: E0517 00:21:31.181911 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.182174 kubelet[2458]: W0517 00:21:31.182054 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.182174 kubelet[2458]: E0517 00:21:31.182086 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.183446 kubelet[2458]: E0517 00:21:31.183089 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.183446 kubelet[2458]: W0517 00:21:31.183112 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.183446 kubelet[2458]: E0517 00:21:31.183138 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.204173 kubelet[2458]: E0517 00:21:31.204123 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.204173 kubelet[2458]: W0517 00:21:31.204158 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.204173 kubelet[2458]: E0517 00:21:31.204188 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.204803 kubelet[2458]: E0517 00:21:31.204627 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.204803 kubelet[2458]: W0517 00:21:31.204662 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.204803 kubelet[2458]: E0517 00:21:31.204679 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.205205 kubelet[2458]: E0517 00:21:31.205182 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.205205 kubelet[2458]: W0517 00:21:31.205202 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.205354 kubelet[2458]: E0517 00:21:31.205220 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.205737 kubelet[2458]: E0517 00:21:31.205716 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.205737 kubelet[2458]: W0517 00:21:31.205735 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.206126 kubelet[2458]: E0517 00:21:31.205756 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.206126 kubelet[2458]: E0517 00:21:31.206108 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.206126 kubelet[2458]: W0517 00:21:31.206119 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.206287 kubelet[2458]: E0517 00:21:31.206131 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.207037 kubelet[2458]: E0517 00:21:31.207012 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.207037 kubelet[2458]: W0517 00:21:31.207033 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.207493 kubelet[2458]: E0517 00:21:31.207052 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.207579 kubelet[2458]: E0517 00:21:31.207501 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.208467 kubelet[2458]: W0517 00:21:31.208411 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.208467 kubelet[2458]: E0517 00:21:31.208453 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.209183 kubelet[2458]: E0517 00:21:31.209141 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.209183 kubelet[2458]: W0517 00:21:31.209162 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.209183 kubelet[2458]: E0517 00:21:31.209180 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.210181 kubelet[2458]: E0517 00:21:31.210141 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.210181 kubelet[2458]: W0517 00:21:31.210160 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.210419 kubelet[2458]: E0517 00:21:31.210375 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.211813 kubelet[2458]: E0517 00:21:31.211779 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.211813 kubelet[2458]: W0517 00:21:31.211803 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.212410 kubelet[2458]: E0517 00:21:31.211827 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.212953 kubelet[2458]: E0517 00:21:31.212722 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.212953 kubelet[2458]: W0517 00:21:31.212748 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.212953 kubelet[2458]: E0517 00:21:31.212784 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.213943 kubelet[2458]: E0517 00:21:31.213725 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.213943 kubelet[2458]: W0517 00:21:31.213749 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.213943 kubelet[2458]: E0517 00:21:31.213770 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.214831 kubelet[2458]: E0517 00:21:31.214589 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.214831 kubelet[2458]: W0517 00:21:31.214614 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.214831 kubelet[2458]: E0517 00:21:31.214636 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.215628 kubelet[2458]: E0517 00:21:31.215360 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.215628 kubelet[2458]: W0517 00:21:31.215382 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.215628 kubelet[2458]: E0517 00:21:31.215408 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.216479 containerd[1459]: time="2025-05-17T00:21:31.215806344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:31.217077 kubelet[2458]: E0517 00:21:31.216194 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.217077 kubelet[2458]: W0517 00:21:31.216212 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.217077 kubelet[2458]: E0517 00:21:31.216233 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.218588 kubelet[2458]: E0517 00:21:31.217945 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.218588 kubelet[2458]: W0517 00:21:31.217981 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.218588 kubelet[2458]: E0517 00:21:31.218009 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.218864 kubelet[2458]: E0517 00:21:31.218851 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.218918 kubelet[2458]: W0517 00:21:31.218870 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.218918 kubelet[2458]: E0517 00:21:31.218891 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:31.220161 containerd[1459]: time="2025-05-17T00:21:31.220075579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:21:31.221454 containerd[1459]: time="2025-05-17T00:21:31.221404238Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:31.230732 containerd[1459]: time="2025-05-17T00:21:31.229239377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:31.231780 containerd[1459]: time="2025-05-17T00:21:31.230148850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.440164956s" May 17 00:21:31.231780 containerd[1459]: time="2025-05-17T00:21:31.231691419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:21:31.232851 kubelet[2458]: E0517 00:21:31.232803 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:31.232962 kubelet[2458]: W0517 00:21:31.232843 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:31.232962 kubelet[2458]: E0517 00:21:31.232898 2458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:31.248489 containerd[1459]: time="2025-05-17T00:21:31.248210869Z" level=info msg="CreateContainer within sandbox \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:21:31.276550 containerd[1459]: time="2025-05-17T00:21:31.275485697Z" level=info msg="CreateContainer within sandbox \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76\"" May 17 00:21:31.278955 containerd[1459]: time="2025-05-17T00:21:31.278831810Z" level=info msg="StartContainer for \"1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76\"" May 17 00:21:31.280918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938015507.mount: Deactivated successfully. May 17 00:21:31.368395 systemd[1]: run-containerd-runc-k8s.io-1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76-runc.fGpjZI.mount: Deactivated successfully. May 17 00:21:31.380256 systemd[1]: Started cri-containerd-1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76.scope - libcontainer container 1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76. 
May 17 00:21:31.433350 containerd[1459]: time="2025-05-17T00:21:31.433198915Z" level=info msg="StartContainer for \"1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76\" returns successfully" May 17 00:21:31.461789 systemd[1]: cri-containerd-1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76.scope: Deactivated successfully. May 17 00:21:31.578500 containerd[1459]: time="2025-05-17T00:21:31.536556497Z" level=info msg="shim disconnected" id=1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76 namespace=k8s.io May 17 00:21:31.578500 containerd[1459]: time="2025-05-17T00:21:31.578351797Z" level=warning msg="cleaning up after shim disconnected" id=1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76 namespace=k8s.io May 17 00:21:31.578500 containerd[1459]: time="2025-05-17T00:21:31.578379801Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:32.136871 containerd[1459]: time="2025-05-17T00:21:32.136805428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:21:32.157901 kubelet[2458]: I0517 00:21:32.157376 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b5b45f66c-m8v47" podStartSLOduration=3.646400113 podStartE2EDuration="6.157352175s" podCreationTimestamp="2025-05-17 00:21:26 +0000 UTC" firstStartedPulling="2025-05-17 00:21:27.277978757 +0000 UTC m=+20.510261534" lastFinishedPulling="2025-05-17 00:21:29.788930833 +0000 UTC m=+23.021213596" observedRunningTime="2025-05-17 00:21:30.156828757 +0000 UTC m=+23.389111545" watchObservedRunningTime="2025-05-17 00:21:32.157352175 +0000 UTC m=+25.389634961" May 17 00:21:32.270503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1654e8a63cf192397c37597ed6c818586f197659bb6584012f72d34cdfe38f76-rootfs.mount: Deactivated successfully. 
May 17 00:21:32.973875 kubelet[2458]: E0517 00:21:32.972732 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:34.971504 kubelet[2458]: E0517 00:21:34.971410 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:35.156440 containerd[1459]: time="2025-05-17T00:21:35.155683812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:35.159117 containerd[1459]: time="2025-05-17T00:21:35.158614663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:21:35.159560 containerd[1459]: time="2025-05-17T00:21:35.159502114Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:35.162055 containerd[1459]: time="2025-05-17T00:21:35.161998624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:35.162936 containerd[1459]: time="2025-05-17T00:21:35.162897065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.026050666s" May 17 00:21:35.162936 containerd[1459]: time="2025-05-17T00:21:35.162934720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:21:35.169917 containerd[1459]: time="2025-05-17T00:21:35.169856274Z" level=info msg="CreateContainer within sandbox \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:21:35.195157 containerd[1459]: time="2025-05-17T00:21:35.195094496Z" level=info msg="CreateContainer within sandbox \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c\"" May 17 00:21:35.196385 containerd[1459]: time="2025-05-17T00:21:35.195948420Z" level=info msg="StartContainer for \"4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c\"" May 17 00:21:35.264891 systemd[1]: Started cri-containerd-4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c.scope - libcontainer container 4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c. 
May 17 00:21:35.309108 containerd[1459]: time="2025-05-17T00:21:35.309012442Z" level=info msg="StartContainer for \"4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c\" returns successfully" May 17 00:21:35.801263 kubelet[2458]: I0517 00:21:35.800090 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:35.802109 kubelet[2458]: E0517 00:21:35.801925 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:36.005717 systemd[1]: cri-containerd-4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c.scope: Deactivated successfully. May 17 00:21:36.043200 kubelet[2458]: I0517 00:21:36.042574 2458 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:21:36.088426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c-rootfs.mount: Deactivated successfully. May 17 00:21:36.093376 containerd[1459]: time="2025-05-17T00:21:36.090053023Z" level=info msg="shim disconnected" id=4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c namespace=k8s.io May 17 00:21:36.093376 containerd[1459]: time="2025-05-17T00:21:36.090852245Z" level=warning msg="cleaning up after shim disconnected" id=4715ddf666b778528bb57862b9d4f57e3caa06bbae92d4e938538a4a3111322c namespace=k8s.io May 17 00:21:36.093376 containerd[1459]: time="2025-05-17T00:21:36.090876385Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:36.142586 systemd[1]: Created slice kubepods-besteffort-pod853d6b01_d262_432a_a248_addafbd3367b.slice - libcontainer container kubepods-besteffort-pod853d6b01_d262_432a_a248_addafbd3367b.slice. 
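The dns.go:153 "Nameserver limits exceeded" warnings in this stretch come from kubelet's cap of three nameservers per generated resolv.conf; the host's resolver configuration evidently lists more than three entries (note that the applied line even carries 67.207.67.3 twice), and everything past the limit is omitted. A minimal sketch of that truncation, keeping the first three entries to match the applied line in the log; the fourth nameserver below is invented purely for illustration, and the real logic lives in kubelet's dns.go:

package main

import "fmt"

// maxDNSNameservers matches Kubernetes' documented cap of 3.
const maxDNSNameservers = 3

func capNameservers(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxDNSNameservers {
		return ns, false
	}
	return ns[:maxDNSNameservers], true
}

func main() {
	applied, truncated := capNameservers([]string{
		"67.207.67.3", "67.207.67.2", "67.207.67.3",
		"1.1.1.1", // hypothetical extra entry that triggers the warning
	})
	fmt.Println(applied, truncated) // [67.207.67.3 67.207.67.2 67.207.67.3] true
}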
May 17 00:21:36.149628 kubelet[2458]: I0517 00:21:36.149579 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr87n\" (UniqueName: \"kubernetes.io/projected/853d6b01-d262-432a-a248-addafbd3367b-kube-api-access-zr87n\") pod \"whisker-64f4887fbc-jcxt9\" (UID: \"853d6b01-d262-432a-a248-addafbd3367b\") " pod="calico-system/whisker-64f4887fbc-jcxt9" May 17 00:21:36.150541 kubelet[2458]: I0517 00:21:36.149953 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr99l\" (UniqueName: \"kubernetes.io/projected/d63156b5-315e-4424-8e51-f55b1ec001db-kube-api-access-sr99l\") pod \"coredns-674b8bbfcf-n9mg8\" (UID: \"d63156b5-315e-4424-8e51-f55b1ec001db\") " pod="kube-system/coredns-674b8bbfcf-n9mg8" May 17 00:21:36.150541 kubelet[2458]: I0517 00:21:36.149996 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d63156b5-315e-4424-8e51-f55b1ec001db-config-volume\") pod \"coredns-674b8bbfcf-n9mg8\" (UID: \"d63156b5-315e-4424-8e51-f55b1ec001db\") " pod="kube-system/coredns-674b8bbfcf-n9mg8" May 17 00:21:36.150541 kubelet[2458]: I0517 00:21:36.150029 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/853d6b01-d262-432a-a248-addafbd3367b-whisker-backend-key-pair\") pod \"whisker-64f4887fbc-jcxt9\" (UID: \"853d6b01-d262-432a-a248-addafbd3367b\") " pod="calico-system/whisker-64f4887fbc-jcxt9" May 17 00:21:36.150541 kubelet[2458]: I0517 00:21:36.150055 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853d6b01-d262-432a-a248-addafbd3367b-whisker-ca-bundle\") pod \"whisker-64f4887fbc-jcxt9\" (UID: \"853d6b01-d262-432a-a248-addafbd3367b\") " pod="calico-system/whisker-64f4887fbc-jcxt9" May 17 00:21:36.178176 systemd[1]: Created slice kubepods-burstable-podd63156b5_315e_4424_8e51_f55b1ec001db.slice - libcontainer container kubepods-burstable-podd63156b5_315e_4424_8e51_f55b1ec001db.slice. May 17 00:21:36.181929 containerd[1459]: time="2025-05-17T00:21:36.181867750Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:21:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:21:36.210540 kubelet[2458]: E0517 00:21:36.208363 2458 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-n-6deca81674\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-n-6deca81674' and this object" logger="UnhandledError" reflector="object-\"calico-apiserver\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" May 17 00:21:36.224782 kubelet[2458]: E0517 00:21:36.224751 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:36.225960 systemd[1]: Created slice kubepods-besteffort-pod26a01dd2_ca4a_48d8_b676_505f65a04723.slice - libcontainer container kubepods-besteffort-pod26a01dd2_ca4a_48d8_b676_505f65a04723.slice. 
May 17 00:21:36.245081 systemd[1]: Created slice kubepods-besteffort-pod877588de_b588_49a3_a0b6_58a44269c024.slice - libcontainer container kubepods-besteffort-pod877588de_b588_49a3_a0b6_58a44269c024.slice. May 17 00:21:36.251304 kubelet[2458]: I0517 00:21:36.251170 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/877588de-b588-49a3-a0b6-58a44269c024-calico-apiserver-certs\") pod \"calico-apiserver-5759bc5bd9-g9kg6\" (UID: \"877588de-b588-49a3-a0b6-58a44269c024\") " pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" May 17 00:21:36.251304 kubelet[2458]: I0517 00:21:36.251244 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzl4z\" (UniqueName: \"kubernetes.io/projected/877588de-b588-49a3-a0b6-58a44269c024-kube-api-access-kzl4z\") pod \"calico-apiserver-5759bc5bd9-g9kg6\" (UID: \"877588de-b588-49a3-a0b6-58a44269c024\") " pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" May 17 00:21:36.253175 kubelet[2458]: I0517 00:21:36.251277 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8567c791-2cba-41d1-9dcc-a920a450b5ec-config-volume\") pod \"coredns-674b8bbfcf-k8sjh\" (UID: \"8567c791-2cba-41d1-9dcc-a920a450b5ec\") " pod="kube-system/coredns-674b8bbfcf-k8sjh" May 17 00:21:36.253175 kubelet[2458]: I0517 00:21:36.251694 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c8e662ac-310d-4631-a4cc-a86f8c336b26-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-l9z28\" (UID: \"c8e662ac-310d-4631-a4cc-a86f8c336b26\") " pod="calico-system/goldmane-78d55f7ddc-l9z28" May 17 00:21:36.253175 kubelet[2458]: I0517 00:21:36.251738 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/535f48a8-c7ba-4110-afb4-e41a21f02377-tigera-ca-bundle\") pod \"calico-kube-controllers-7f8bd85b9-zb9lw\" (UID: \"535f48a8-c7ba-4110-afb4-e41a21f02377\") " pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" May 17 00:21:36.253175 kubelet[2458]: I0517 00:21:36.251772 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klp2d\" (UniqueName: \"kubernetes.io/projected/26a01dd2-ca4a-48d8-b676-505f65a04723-kube-api-access-klp2d\") pod \"calico-apiserver-5759bc5bd9-rnppq\" (UID: \"26a01dd2-ca4a-48d8-b676-505f65a04723\") " pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" May 17 00:21:36.253853 kubelet[2458]: I0517 00:21:36.253661 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtm9p\" (UniqueName: \"kubernetes.io/projected/8567c791-2cba-41d1-9dcc-a920a450b5ec-kube-api-access-jtm9p\") pod \"coredns-674b8bbfcf-k8sjh\" (UID: \"8567c791-2cba-41d1-9dcc-a920a450b5ec\") " pod="kube-system/coredns-674b8bbfcf-k8sjh" May 17 00:21:36.253853 kubelet[2458]: I0517 00:21:36.253717 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e662ac-310d-4631-a4cc-a86f8c336b26-config\") pod \"goldmane-78d55f7ddc-l9z28\" (UID: \"c8e662ac-310d-4631-a4cc-a86f8c336b26\") " pod="calico-system/goldmane-78d55f7ddc-l9z28" 
May 17 00:21:36.253853 kubelet[2458]: I0517 00:21:36.253742 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8e662ac-310d-4631-a4cc-a86f8c336b26-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-l9z28\" (UID: \"c8e662ac-310d-4631-a4cc-a86f8c336b26\") " pod="calico-system/goldmane-78d55f7ddc-l9z28" May 17 00:21:36.255337 kubelet[2458]: I0517 00:21:36.254689 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6t2\" (UniqueName: \"kubernetes.io/projected/c8e662ac-310d-4631-a4cc-a86f8c336b26-kube-api-access-lc6t2\") pod \"goldmane-78d55f7ddc-l9z28\" (UID: \"c8e662ac-310d-4631-a4cc-a86f8c336b26\") " pod="calico-system/goldmane-78d55f7ddc-l9z28" May 17 00:21:36.255337 kubelet[2458]: I0517 00:21:36.255255 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfxt\" (UniqueName: \"kubernetes.io/projected/535f48a8-c7ba-4110-afb4-e41a21f02377-kube-api-access-njfxt\") pod \"calico-kube-controllers-7f8bd85b9-zb9lw\" (UID: \"535f48a8-c7ba-4110-afb4-e41a21f02377\") " pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" May 17 00:21:36.255803 kubelet[2458]: I0517 00:21:36.255312 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26a01dd2-ca4a-48d8-b676-505f65a04723-calico-apiserver-certs\") pod \"calico-apiserver-5759bc5bd9-rnppq\" (UID: \"26a01dd2-ca4a-48d8-b676-505f65a04723\") " pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" May 17 00:21:36.314801 systemd[1]: Created slice kubepods-besteffort-pod535f48a8_c7ba_4110_afb4_e41a21f02377.slice - libcontainer container kubepods-besteffort-pod535f48a8_c7ba_4110_afb4_e41a21f02377.slice. May 17 00:21:36.367093 systemd[1]: Created slice kubepods-burstable-pod8567c791_2cba_41d1_9dcc_a920a450b5ec.slice - libcontainer container kubepods-burstable-pod8567c791_2cba_41d1_9dcc_a920a450b5ec.slice. May 17 00:21:36.414441 systemd[1]: Created slice kubepods-besteffort-podc8e662ac_310d_4631_a4cc_a86f8c336b26.slice - libcontainer container kubepods-besteffort-podc8e662ac_310d_4631_a4cc_a86f8c336b26.slice. 
May 17 00:21:36.426798 containerd[1459]: time="2025-05-17T00:21:36.426350427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l9z28,Uid:c8e662ac-310d-4631-a4cc-a86f8c336b26,Namespace:calico-system,Attempt:0,}" May 17 00:21:36.471745 containerd[1459]: time="2025-05-17T00:21:36.471028869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64f4887fbc-jcxt9,Uid:853d6b01-d262-432a-a248-addafbd3367b,Namespace:calico-system,Attempt:0,}" May 17 00:21:36.529226 kubelet[2458]: E0517 00:21:36.528623 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:36.531950 containerd[1459]: time="2025-05-17T00:21:36.531883309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9mg8,Uid:d63156b5-315e-4424-8e51-f55b1ec001db,Namespace:kube-system,Attempt:0,}" May 17 00:21:36.648698 containerd[1459]: time="2025-05-17T00:21:36.645363827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8bd85b9-zb9lw,Uid:535f48a8-c7ba-4110-afb4-e41a21f02377,Namespace:calico-system,Attempt:0,}" May 17 00:21:36.712653 kubelet[2458]: E0517 00:21:36.712264 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:36.716043 containerd[1459]: time="2025-05-17T00:21:36.715987870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8sjh,Uid:8567c791-2cba-41d1-9dcc-a920a450b5ec,Namespace:kube-system,Attempt:0,}" May 17 00:21:36.808369 containerd[1459]: time="2025-05-17T00:21:36.808289734Z" level=error msg="Failed to destroy network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.814046 containerd[1459]: time="2025-05-17T00:21:36.813803615Z" level=error msg="encountered an error cleaning up failed sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.814628 containerd[1459]: time="2025-05-17T00:21:36.814309178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64f4887fbc-jcxt9,Uid:853d6b01-d262-432a-a248-addafbd3367b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.822493 containerd[1459]: time="2025-05-17T00:21:36.822308739Z" level=error msg="Failed to destroy network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.823845 kubelet[2458]: E0517 00:21:36.823641 2458 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.823845 kubelet[2458]: E0517 00:21:36.823756 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64f4887fbc-jcxt9" May 17 00:21:36.823845 kubelet[2458]: E0517 00:21:36.823788 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64f4887fbc-jcxt9" May 17 00:21:36.824103 kubelet[2458]: E0517 00:21:36.823885 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64f4887fbc-jcxt9_calico-system(853d6b01-d262-432a-a248-addafbd3367b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64f4887fbc-jcxt9_calico-system(853d6b01-d262-432a-a248-addafbd3367b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64f4887fbc-jcxt9" podUID="853d6b01-d262-432a-a248-addafbd3367b" May 17 00:21:36.826208 containerd[1459]: time="2025-05-17T00:21:36.824587464Z" level=error msg="encountered an error cleaning up failed sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.826597 containerd[1459]: time="2025-05-17T00:21:36.825639723Z" level=error msg="Failed to destroy network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.826978 containerd[1459]: time="2025-05-17T00:21:36.826936683Z" level=error msg="encountered an error cleaning up failed sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.827061 containerd[1459]: time="2025-05-17T00:21:36.826999359Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9mg8,Uid:d63156b5-315e-4424-8e51-f55b1ec001db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.827123 containerd[1459]: time="2025-05-17T00:21:36.826441745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l9z28,Uid:c8e662ac-310d-4631-a4cc-a86f8c336b26,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.827797 kubelet[2458]: E0517 00:21:36.827731 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.827907 kubelet[2458]: E0517 00:21:36.827797 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-l9z28" May 17 00:21:36.827907 kubelet[2458]: E0517 00:21:36.827825 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-l9z28" May 17 00:21:36.827907 kubelet[2458]: E0517 00:21:36.827880 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-l9z28_calico-system(c8e662ac-310d-4631-a4cc-a86f8c336b26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-l9z28_calico-system(c8e662ac-310d-4631-a4cc-a86f8c336b26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:21:36.828081 kubelet[2458]: E0517 00:21:36.828023 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.828081 kubelet[2458]: E0517 00:21:36.828059 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n9mg8" May 17 00:21:36.828081 kubelet[2458]: E0517 00:21:36.828075 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n9mg8" May 17 00:21:36.828170 kubelet[2458]: E0517 00:21:36.828125 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n9mg8_kube-system(d63156b5-315e-4424-8e51-f55b1ec001db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n9mg8_kube-system(d63156b5-315e-4424-8e51-f55b1ec001db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n9mg8" podUID="d63156b5-315e-4424-8e51-f55b1ec001db" May 17 00:21:36.890298 containerd[1459]: time="2025-05-17T00:21:36.889827547Z" level=error msg="Failed to destroy network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.890748 containerd[1459]: time="2025-05-17T00:21:36.890707402Z" level=error msg="encountered an error cleaning up failed sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.890843 containerd[1459]: time="2025-05-17T00:21:36.890779953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8bd85b9-zb9lw,Uid:535f48a8-c7ba-4110-afb4-e41a21f02377,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.891108 kubelet[2458]: E0517 00:21:36.891060 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.891165 kubelet[2458]: E0517 00:21:36.891133 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" May 17 00:21:36.891201 kubelet[2458]: E0517 00:21:36.891161 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" May 17 00:21:36.891245 kubelet[2458]: E0517 00:21:36.891220 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f8bd85b9-zb9lw_calico-system(535f48a8-c7ba-4110-afb4-e41a21f02377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f8bd85b9-zb9lw_calico-system(535f48a8-c7ba-4110-afb4-e41a21f02377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" podUID="535f48a8-c7ba-4110-afb4-e41a21f02377" May 17 00:21:36.906259 containerd[1459]: time="2025-05-17T00:21:36.906064968Z" level=error msg="Failed to destroy network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.907915 containerd[1459]: time="2025-05-17T00:21:36.907844510Z" level=error msg="encountered an error cleaning up failed sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.908291 containerd[1459]: time="2025-05-17T00:21:36.907952707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8sjh,Uid:8567c791-2cba-41d1-9dcc-a920a450b5ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.909343 kubelet[2458]: E0517 00:21:36.909277 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:36.909506 kubelet[2458]: E0517 00:21:36.909372 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k8sjh" May 17 00:21:36.909506 kubelet[2458]: E0517 00:21:36.909407 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k8sjh" May 17 00:21:36.912096 kubelet[2458]: E0517 00:21:36.909506 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k8sjh_kube-system(8567c791-2cba-41d1-9dcc-a920a450b5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k8sjh_kube-system(8567c791-2cba-41d1-9dcc-a920a450b5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k8sjh" podUID="8567c791-2cba-41d1-9dcc-a920a450b5ec" May 17 00:21:36.985226 systemd[1]: Created slice kubepods-besteffort-pod3741cab5_ed2e_41cc_bf79_c5ceb8c1246a.slice - libcontainer container kubepods-besteffort-pod3741cab5_ed2e_41cc_bf79_c5ceb8c1246a.slice. 
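Every RunPodSandbox and StopPodSandbox failure in this stretch bottoms out in the same stat call: the Calico CNI plugin refuses to set up or tear down any pod network until /var/lib/calico/nodename exists, and that file is only written by the calico/node container once it is running (its node:v3.30.0 image pull begins just below). A minimal sketch of that gate, assuming only the Go standard library and paraphrasing the plugin's own error text from the log:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the marker calico/node writes at startup; the CNI
// plugin checks for it before attempting any network add or delete.
const nodenameFile = "/var/lib/calico/nodename"

func calicoNodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// Matches the recurring log line: until calico/node has run and
		// mounted /var/lib/calico/, every CNI operation fails this way.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Println("sandbox setup would fail:", err)
		return
	}
	fmt.Println("node:", name)
}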
May 17 00:21:36.997696 containerd[1459]: time="2025-05-17T00:21:36.997633561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-789m7,Uid:3741cab5-ed2e-41cc-bf79-c5ceb8c1246a,Namespace:calico-system,Attempt:0,}" May 17 00:21:37.101152 containerd[1459]: time="2025-05-17T00:21:37.101071843Z" level=error msg="Failed to destroy network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.102161 containerd[1459]: time="2025-05-17T00:21:37.102081440Z" level=error msg="encountered an error cleaning up failed sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.102501 containerd[1459]: time="2025-05-17T00:21:37.102458888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-789m7,Uid:3741cab5-ed2e-41cc-bf79-c5ceb8c1246a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.102997 kubelet[2458]: E0517 00:21:37.102934 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.103638 kubelet[2458]: E0517 00:21:37.103018 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-789m7" May 17 00:21:37.103638 kubelet[2458]: E0517 00:21:37.103048 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-789m7" May 17 00:21:37.103638 kubelet[2458]: E0517 00:21:37.103136 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-789m7_calico-system(3741cab5-ed2e-41cc-bf79-c5ceb8c1246a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-789m7_calico-system(3741cab5-ed2e-41cc-bf79-c5ceb8c1246a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:37.228592 kubelet[2458]: I0517 00:21:37.227612 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:21:37.236624 kubelet[2458]: I0517 00:21:37.235826 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:21:37.237984 containerd[1459]: time="2025-05-17T00:21:37.237919432Z" level=info msg="StopPodSandbox for \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\"" May 17 00:21:37.240588 containerd[1459]: time="2025-05-17T00:21:37.240494565Z" level=info msg="Ensure that sandbox 3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed in task-service has been cleanup successfully" May 17 00:21:37.241671 containerd[1459]: time="2025-05-17T00:21:37.241505290Z" level=info msg="StopPodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\"" May 17 00:21:37.241882 containerd[1459]: time="2025-05-17T00:21:37.241856272Z" level=info msg="Ensure that sandbox 2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31 in task-service has been cleanup successfully" May 17 00:21:37.249114 kubelet[2458]: I0517 00:21:37.248101 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:21:37.253178 containerd[1459]: time="2025-05-17T00:21:37.253114187Z" level=info msg="StopPodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\"" May 17 00:21:37.254697 containerd[1459]: time="2025-05-17T00:21:37.254628462Z" level=info msg="Ensure that sandbox 95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426 in task-service has been cleanup successfully" May 17 00:21:37.258603 kubelet[2458]: I0517 00:21:37.257900 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:21:37.259825 containerd[1459]: time="2025-05-17T00:21:37.259757141Z" level=info msg="StopPodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\"" May 17 00:21:37.260811 containerd[1459]: time="2025-05-17T00:21:37.260677243Z" level=info msg="Ensure that sandbox 14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357 in task-service has been cleanup successfully" May 17 00:21:37.324480 containerd[1459]: time="2025-05-17T00:21:37.324211093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:21:37.330438 kubelet[2458]: I0517 00:21:37.329638 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:21:37.339916 containerd[1459]: time="2025-05-17T00:21:37.338782966Z" level=info msg="StopPodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\"" May 17 00:21:37.341646 kubelet[2458]: I0517 00:21:37.341320 2458 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:21:37.346451 containerd[1459]: time="2025-05-17T00:21:37.346402787Z" level=info msg="StopPodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\"" May 17 00:21:37.349648 containerd[1459]: time="2025-05-17T00:21:37.347086191Z" level=info msg="Ensure that sandbox f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f in task-service has been cleanup successfully" May 17 00:21:37.350712 containerd[1459]: time="2025-05-17T00:21:37.350305441Z" level=info msg="Ensure that sandbox 64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735 in task-service has been cleanup successfully" May 17 00:21:37.460876 containerd[1459]: time="2025-05-17T00:21:37.460812101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-g9kg6,Uid:877588de-b588-49a3-a0b6-58a44269c024,Namespace:calico-apiserver,Attempt:0,}" May 17 00:21:37.565274 containerd[1459]: time="2025-05-17T00:21:37.564852958Z" level=error msg="StopPodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" failed" error="failed to destroy network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.566248 kubelet[2458]: E0517 00:21:37.565230 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:21:37.566547 kubelet[2458]: E0517 00:21:37.566272 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357"} May 17 00:21:37.566547 kubelet[2458]: E0517 00:21:37.566361 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d63156b5-315e-4424-8e51-f55b1ec001db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:37.566547 kubelet[2458]: E0517 00:21:37.566391 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d63156b5-315e-4424-8e51-f55b1ec001db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n9mg8" podUID="d63156b5-315e-4424-8e51-f55b1ec001db" May 17 00:21:37.580544 containerd[1459]: time="2025-05-17T00:21:37.580290189Z" level=error msg="StopPodSandbox for 
\"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" failed" error="failed to destroy network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.581485 kubelet[2458]: E0517 00:21:37.581440 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:21:37.581615 kubelet[2458]: E0517 00:21:37.581501 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed"} May 17 00:21:37.581721 kubelet[2458]: E0517 00:21:37.581693 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8e662ac-310d-4631-a4cc-a86f8c336b26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:37.581816 kubelet[2458]: E0517 00:21:37.581739 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8e662ac-310d-4631-a4cc-a86f8c336b26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:21:37.595781 containerd[1459]: time="2025-05-17T00:21:37.595702936Z" level=error msg="StopPodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" failed" error="failed to destroy network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.596769 kubelet[2458]: E0517 00:21:37.596694 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:21:37.596956 kubelet[2458]: E0517 00:21:37.596780 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735"} May 17 00:21:37.596956 kubelet[2458]: E0517 00:21:37.596822 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"853d6b01-d262-432a-a248-addafbd3367b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:37.596956 kubelet[2458]: E0517 00:21:37.596855 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"853d6b01-d262-432a-a248-addafbd3367b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64f4887fbc-jcxt9" podUID="853d6b01-d262-432a-a248-addafbd3367b" May 17 00:21:37.612568 containerd[1459]: time="2025-05-17T00:21:37.611216427Z" level=error msg="StopPodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" failed" error="failed to destroy network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.612568 containerd[1459]: time="2025-05-17T00:21:37.612186972Z" level=error msg="StopPodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" failed" error="failed to destroy network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.612871 kubelet[2458]: E0517 00:21:37.611633 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:21:37.612871 kubelet[2458]: E0517 00:21:37.611732 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f"} May 17 00:21:37.612871 kubelet[2458]: E0517 00:21:37.611815 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8567c791-2cba-41d1-9dcc-a920a450b5ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" May 17 00:21:37.612871 kubelet[2458]: E0517 00:21:37.611867 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8567c791-2cba-41d1-9dcc-a920a450b5ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k8sjh" podUID="8567c791-2cba-41d1-9dcc-a920a450b5ec" May 17 00:21:37.613099 kubelet[2458]: E0517 00:21:37.612477 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:21:37.613099 kubelet[2458]: E0517 00:21:37.612608 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31"} May 17 00:21:37.613099 kubelet[2458]: E0517 00:21:37.612660 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:37.613099 kubelet[2458]: E0517 00:21:37.612685 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-789m7" podUID="3741cab5-ed2e-41cc-bf79-c5ceb8c1246a" May 17 00:21:37.621047 containerd[1459]: time="2025-05-17T00:21:37.620897289Z" level=error msg="StopPodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" failed" error="failed to destroy network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.621656 kubelet[2458]: E0517 00:21:37.621578 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:21:37.621811 kubelet[2458]: E0517 00:21:37.621673 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426"} May 17 00:21:37.621811 kubelet[2458]: E0517 00:21:37.621717 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"535f48a8-c7ba-4110-afb4-e41a21f02377\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:37.621811 kubelet[2458]: E0517 00:21:37.621749 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"535f48a8-c7ba-4110-afb4-e41a21f02377\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" podUID="535f48a8-c7ba-4110-afb4-e41a21f02377" May 17 00:21:37.692165 containerd[1459]: time="2025-05-17T00:21:37.691794733Z" level=error msg="Failed to destroy network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.692885 containerd[1459]: time="2025-05-17T00:21:37.692840050Z" level=error msg="encountered an error cleaning up failed sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.697694 containerd[1459]: time="2025-05-17T00:21:37.697418678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-g9kg6,Uid:877588de-b588-49a3-a0b6-58a44269c024,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.699145 kubelet[2458]: E0517 00:21:37.697957 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.699145 kubelet[2458]: E0517 00:21:37.698046 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" May 17 00:21:37.699145 kubelet[2458]: E0517 00:21:37.698081 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" May 17 00:21:37.699443 kubelet[2458]: E0517 00:21:37.698251 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5759bc5bd9-g9kg6_calico-apiserver(877588de-b588-49a3-a0b6-58a44269c024)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5759bc5bd9-g9kg6_calico-apiserver(877588de-b588-49a3-a0b6-58a44269c024)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" podUID="877588de-b588-49a3-a0b6-58a44269c024" May 17 00:21:37.742321 containerd[1459]: time="2025-05-17T00:21:37.742228308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-rnppq,Uid:26a01dd2-ca4a-48d8-b676-505f65a04723,Namespace:calico-apiserver,Attempt:0,}" May 17 00:21:37.860360 containerd[1459]: time="2025-05-17T00:21:37.859609077Z" level=error msg="Failed to destroy network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.861711 containerd[1459]: time="2025-05-17T00:21:37.861277128Z" level=error msg="encountered an error cleaning up failed sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.861711 containerd[1459]: time="2025-05-17T00:21:37.861371156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-rnppq,Uid:26a01dd2-ca4a-48d8-b676-505f65a04723,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.862764 kubelet[2458]: E0517 00:21:37.862329 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:37.862764 kubelet[2458]: E0517 00:21:37.862422 2458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" May 17 00:21:37.862764 kubelet[2458]: E0517 00:21:37.862467 2458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" May 17 00:21:37.863048 kubelet[2458]: E0517 00:21:37.862563 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5759bc5bd9-rnppq_calico-apiserver(26a01dd2-ca4a-48d8-b676-505f65a04723)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5759bc5bd9-rnppq_calico-apiserver(26a01dd2-ca4a-48d8-b676-505f65a04723)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" podUID="26a01dd2-ca4a-48d8-b676-505f65a04723" May 17 00:21:38.283054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0-shm.mount: Deactivated successfully. 
May 17 00:21:38.346409 kubelet[2458]: I0517 00:21:38.346346 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:21:38.348089 containerd[1459]: time="2025-05-17T00:21:38.347440304Z" level=info msg="StopPodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\"" May 17 00:21:38.348089 containerd[1459]: time="2025-05-17T00:21:38.348028553Z" level=info msg="Ensure that sandbox 68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62 in task-service has been cleanup successfully" May 17 00:21:38.351259 kubelet[2458]: I0517 00:21:38.350495 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:21:38.366001 containerd[1459]: time="2025-05-17T00:21:38.365925954Z" level=info msg="StopPodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\"" May 17 00:21:38.366211 containerd[1459]: time="2025-05-17T00:21:38.366189383Z" level=info msg="Ensure that sandbox ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0 in task-service has been cleanup successfully" May 17 00:21:38.427048 containerd[1459]: time="2025-05-17T00:21:38.426508951Z" level=error msg="StopPodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" failed" error="failed to destroy network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:38.427289 kubelet[2458]: E0517 00:21:38.426918 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:21:38.427289 kubelet[2458]: E0517 00:21:38.426995 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62"} May 17 00:21:38.427289 kubelet[2458]: E0517 00:21:38.427056 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26a01dd2-ca4a-48d8-b676-505f65a04723\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:38.427289 kubelet[2458]: E0517 00:21:38.427235 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26a01dd2-ca4a-48d8-b676-505f65a04723\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" podUID="26a01dd2-ca4a-48d8-b676-505f65a04723" May 17 00:21:38.443864 containerd[1459]: time="2025-05-17T00:21:38.443441201Z" level=error msg="StopPodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" failed" error="failed to destroy network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:38.444323 kubelet[2458]: E0517 00:21:38.443846 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:21:38.444323 kubelet[2458]: E0517 00:21:38.443934 2458 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0"} May 17 00:21:38.444323 kubelet[2458]: E0517 00:21:38.444167 2458 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"877588de-b588-49a3-a0b6-58a44269c024\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:38.444323 kubelet[2458]: E0517 00:21:38.444226 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"877588de-b588-49a3-a0b6-58a44269c024\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" podUID="877588de-b588-49a3-a0b6-58a44269c024" May 17 00:21:44.197296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197003830.mount: Deactivated successfully. 
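
The pattern repeating through these entries — StopPodSandbox fails, killPodWithSyncResult fails, then "Error syncing pod, skipping" — is a per-pod sync loop that records the error and requeues with backoff rather than blocking, which is why the same sandbox IDs resurface seconds apart. A schematic Go sketch of that retry shape, simplified and not the kubelet's actual pod_workers implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // stopSandbox stands in for the CRI StopPodSandbox call that keeps
    // failing while /var/lib/calico/nodename is missing.
    func stopSandbox(id string) error {
        return errors.New("failed to destroy network for sandbox " + id)
    }

    func main() {
        id := "68bfc848..." // truncated sandbox ID, for the sketch only
        backoff := 500 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            if err := stopSandbox(id); err != nil {
                fmt.Printf("attempt %d: Error syncing pod, skipping: %v\n", attempt, err)
                time.Sleep(backoff)
                backoff *= 2 // requeue with a growing delay between retries
                continue
            }
            break
        }
    }
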
May 17 00:21:44.345607 containerd[1459]: time="2025-05-17T00:21:44.330762698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:21:44.345607 containerd[1459]: time="2025-05-17T00:21:44.344845798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:44.355258 containerd[1459]: time="2025-05-17T00:21:44.353607477Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:44.355258 containerd[1459]: time="2025-05-17T00:21:44.354485936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:44.357422 containerd[1459]: time="2025-05-17T00:21:44.357323641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 7.026284087s" May 17 00:21:44.357422 containerd[1459]: time="2025-05-17T00:21:44.357402672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:21:44.427529 containerd[1459]: time="2025-05-17T00:21:44.427449289Z" level=info msg="CreateContainer within sandbox \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:21:44.490419 containerd[1459]: time="2025-05-17T00:21:44.490218191Z" level=info msg="CreateContainer within sandbox \"1d2145a6c879023182133d691caaa9864344355f33d4109a04fba4a56123e314\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e5b17f37ceea0a81c98cc82d412e11b4066c07ec52df6eda6697f9aecfe4d372\"" May 17 00:21:44.503980 containerd[1459]: time="2025-05-17T00:21:44.503864634Z" level=info msg="StartContainer for \"e5b17f37ceea0a81c98cc82d412e11b4066c07ec52df6eda6697f9aecfe4d372\"" May 17 00:21:44.789646 systemd[1]: Started cri-containerd-e5b17f37ceea0a81c98cc82d412e11b4066c07ec52df6eda6697f9aecfe4d372.scope - libcontainer container e5b17f37ceea0a81c98cc82d412e11b4066c07ec52df6eda6697f9aecfe4d372. May 17 00:21:44.864126 containerd[1459]: time="2025-05-17T00:21:44.858478817Z" level=info msg="StartContainer for \"e5b17f37ceea0a81c98cc82d412e11b4066c07ec52df6eda6697f9aecfe4d372\" returns successfully" May 17 00:21:44.993564 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:21:44.994845 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
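
The "Pulled image" entry above reports a repo-digest size of 156396234 bytes transferred in 7.026284087s, which works out to roughly 21 MiB/s; a one-line check of that arithmetic:

    package main

    import "fmt"

    func main() {
        const bytes = 156396234.0   // size reported for calico/node:v3.30.0
        const seconds = 7.026284087 // pull duration from the log
        fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // prints 21.2 MiB/s
    }
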
May 17 00:21:45.347145 containerd[1459]: time="2025-05-17T00:21:45.346821460Z" level=info msg="StopPodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\"" May 17 00:21:45.495583 kubelet[2458]: I0517 00:21:45.473584 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4skj9" podStartSLOduration=2.484057001 podStartE2EDuration="19.46046897s" podCreationTimestamp="2025-05-17 00:21:26 +0000 UTC" firstStartedPulling="2025-05-17 00:21:27.381936636 +0000 UTC m=+20.614219415" lastFinishedPulling="2025-05-17 00:21:44.358348617 +0000 UTC m=+37.590631384" observedRunningTime="2025-05-17 00:21:45.440093534 +0000 UTC m=+38.672376321" watchObservedRunningTime="2025-05-17 00:21:45.46046897 +0000 UTC m=+38.692751753" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.515 [INFO][3698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.517 [INFO][3698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" iface="eth0" netns="/var/run/netns/cni-9d947625-14b3-4602-6edd-0d81f4d08a6c" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.520 [INFO][3698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" iface="eth0" netns="/var/run/netns/cni-9d947625-14b3-4602-6edd-0d81f4d08a6c" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.521 [INFO][3698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" iface="eth0" netns="/var/run/netns/cni-9d947625-14b3-4602-6edd-0d81f4d08a6c" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.521 [INFO][3698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.521 [INFO][3698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.755 [INFO][3705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.757 [INFO][3705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.758 [INFO][3705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.775 [WARNING][3705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.775 [INFO][3705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.780 [INFO][3705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:45.786807 containerd[1459]: 2025-05-17 00:21:45.783 [INFO][3698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:21:45.789785 containerd[1459]: time="2025-05-17T00:21:45.789714108Z" level=info msg="TearDown network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" successfully" May 17 00:21:45.790153 containerd[1459]: time="2025-05-17T00:21:45.789863446Z" level=info msg="StopPodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" returns successfully" May 17 00:21:45.793421 systemd[1]: run-netns-cni\x2d9d947625\x2d14b3\x2d4602\x2d6edd\x2d0d81f4d08a6c.mount: Deactivated successfully. May 17 00:21:45.857482 kubelet[2458]: I0517 00:21:45.856635 2458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr87n\" (UniqueName: \"kubernetes.io/projected/853d6b01-d262-432a-a248-addafbd3367b-kube-api-access-zr87n\") pod \"853d6b01-d262-432a-a248-addafbd3367b\" (UID: \"853d6b01-d262-432a-a248-addafbd3367b\") " May 17 00:21:45.857482 kubelet[2458]: I0517 00:21:45.856821 2458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/853d6b01-d262-432a-a248-addafbd3367b-whisker-backend-key-pair\") pod \"853d6b01-d262-432a-a248-addafbd3367b\" (UID: \"853d6b01-d262-432a-a248-addafbd3367b\") " May 17 00:21:45.857482 kubelet[2458]: I0517 00:21:45.856892 2458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853d6b01-d262-432a-a248-addafbd3367b-whisker-ca-bundle\") pod \"853d6b01-d262-432a-a248-addafbd3367b\" (UID: \"853d6b01-d262-432a-a248-addafbd3367b\") " May 17 00:21:45.888644 systemd[1]: var-lib-kubelet-pods-853d6b01\x2dd262\x2d432a\x2da248\x2daddafbd3367b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:21:45.890636 kubelet[2458]: I0517 00:21:45.887153 2458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853d6b01-d262-432a-a248-addafbd3367b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "853d6b01-d262-432a-a248-addafbd3367b" (UID: "853d6b01-d262-432a-a248-addafbd3367b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:21:45.892710 kubelet[2458]: I0517 00:21:45.884796 2458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853d6b01-d262-432a-a248-addafbd3367b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "853d6b01-d262-432a-a248-addafbd3367b" (UID: "853d6b01-d262-432a-a248-addafbd3367b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:21:45.897690 kubelet[2458]: I0517 00:21:45.896752 2458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d6b01-d262-432a-a248-addafbd3367b-kube-api-access-zr87n" (OuterVolumeSpecName: "kube-api-access-zr87n") pod "853d6b01-d262-432a-a248-addafbd3367b" (UID: "853d6b01-d262-432a-a248-addafbd3367b"). InnerVolumeSpecName "kube-api-access-zr87n". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:21:45.901385 systemd[1]: var-lib-kubelet-pods-853d6b01\x2dd262\x2d432a\x2da248\x2daddafbd3367b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzr87n.mount: Deactivated successfully. May 17 00:21:45.961823 kubelet[2458]: I0517 00:21:45.961660 2458 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zr87n\" (UniqueName: \"kubernetes.io/projected/853d6b01-d262-432a-a248-addafbd3367b-kube-api-access-zr87n\") on node \"ci-4081.3.3-n-6deca81674\" DevicePath \"\"" May 17 00:21:45.961823 kubelet[2458]: I0517 00:21:45.961754 2458 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/853d6b01-d262-432a-a248-addafbd3367b-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-6deca81674\" DevicePath \"\"" May 17 00:21:45.961823 kubelet[2458]: I0517 00:21:45.961774 2458 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853d6b01-d262-432a-a248-addafbd3367b-whisker-ca-bundle\") on node \"ci-4081.3.3-n-6deca81674\" DevicePath \"\"" May 17 00:21:46.418593 systemd[1]: Removed slice kubepods-besteffort-pod853d6b01_d262_432a_a248_addafbd3367b.slice - libcontainer container kubepods-besteffort-pod853d6b01_d262_432a_a248_addafbd3367b.slice. May 17 00:21:46.574558 systemd[1]: Created slice kubepods-besteffort-pode57b1bf0_91c0_4dac_b141_813710ca490d.slice - libcontainer container kubepods-besteffort-pode57b1bf0_91c0_4dac_b141_813710ca490d.slice. 
May 17 00:21:46.673985 kubelet[2458]: I0517 00:21:46.673801 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e57b1bf0-91c0-4dac-b141-813710ca490d-whisker-backend-key-pair\") pod \"whisker-599954c495-65fz2\" (UID: \"e57b1bf0-91c0-4dac-b141-813710ca490d\") " pod="calico-system/whisker-599954c495-65fz2" May 17 00:21:46.673985 kubelet[2458]: I0517 00:21:46.673924 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e57b1bf0-91c0-4dac-b141-813710ca490d-whisker-ca-bundle\") pod \"whisker-599954c495-65fz2\" (UID: \"e57b1bf0-91c0-4dac-b141-813710ca490d\") " pod="calico-system/whisker-599954c495-65fz2" May 17 00:21:46.674690 kubelet[2458]: I0517 00:21:46.673989 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt66p\" (UniqueName: \"kubernetes.io/projected/e57b1bf0-91c0-4dac-b141-813710ca490d-kube-api-access-pt66p\") pod \"whisker-599954c495-65fz2\" (UID: \"e57b1bf0-91c0-4dac-b141-813710ca490d\") " pod="calico-system/whisker-599954c495-65fz2" May 17 00:21:46.881646 containerd[1459]: time="2025-05-17T00:21:46.881600065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-599954c495-65fz2,Uid:e57b1bf0-91c0-4dac-b141-813710ca490d,Namespace:calico-system,Attempt:0,}" May 17 00:21:47.006916 kubelet[2458]: I0517 00:21:47.005250 2458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853d6b01-d262-432a-a248-addafbd3367b" path="/var/lib/kubelet/pods/853d6b01-d262-432a-a248-addafbd3367b/volumes" May 17 00:21:47.159011 systemd-networkd[1370]: cali9f9371f67ec: Link UP May 17 00:21:47.160135 systemd-networkd[1370]: cali9f9371f67ec: Gained carrier May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:46.944 [INFO][3768] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:46.966 [INFO][3768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0 whisker-599954c495- calico-system e57b1bf0-91c0-4dac-b141-813710ca490d 966 0 2025-05-17 00:21:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:599954c495 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 whisker-599954c495-65fz2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9f9371f67ec [] [] }} ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:46.966 [INFO][3768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.019 [INFO][3781] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" 
HandleID="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.020 [INFO][3781] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" HandleID="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d96f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-6deca81674", "pod":"whisker-599954c495-65fz2", "timestamp":"2025-05-17 00:21:47.01983375 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.020 [INFO][3781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.020 [INFO][3781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.020 [INFO][3781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.033 [INFO][3781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.049 [INFO][3781] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.058 [INFO][3781] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.062 [INFO][3781] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.066 [INFO][3781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.066 [INFO][3781] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.070 [INFO][3781] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.082 [INFO][3781] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.092 [INFO][3781] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.129/26] block=192.168.81.128/26 handle="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.092 
[INFO][3781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.129/26] handle="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" host="ci-4081.3.3-n-6deca81674" May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.093 [INFO][3781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:47.191918 containerd[1459]: 2025-05-17 00:21:47.093 [INFO][3781] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.129/26] IPv6=[] ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" HandleID="k8s-pod-network.4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.196303 containerd[1459]: 2025-05-17 00:21:47.102 [INFO][3768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0", GenerateName:"whisker-599954c495-", Namespace:"calico-system", SelfLink:"", UID:"e57b1bf0-91c0-4dac-b141-813710ca490d", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"599954c495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"whisker-599954c495-65fz2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f9371f67ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:47.196303 containerd[1459]: 2025-05-17 00:21:47.102 [INFO][3768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.129/32] ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.196303 containerd[1459]: 2025-05-17 00:21:47.102 [INFO][3768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f9371f67ec ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.196303 containerd[1459]: 2025-05-17 00:21:47.155 [INFO][3768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" 
WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.196303 containerd[1459]: 2025-05-17 00:21:47.156 [INFO][3768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0", GenerateName:"whisker-599954c495-", Namespace:"calico-system", SelfLink:"", UID:"e57b1bf0-91c0-4dac-b141-813710ca490d", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"599954c495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf", Pod:"whisker-599954c495-65fz2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f9371f67ec", MAC:"c6:9e:79:24:83:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:47.196303 containerd[1459]: 2025-05-17 00:21:47.184 [INFO][3768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf" Namespace="calico-system" Pod="whisker-599954c495-65fz2" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--599954c495--65fz2-eth0" May 17 00:21:47.278969 containerd[1459]: time="2025-05-17T00:21:47.278672152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:47.279742 containerd[1459]: time="2025-05-17T00:21:47.279611580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:47.279957 containerd[1459]: time="2025-05-17T00:21:47.279687190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:47.289586 containerd[1459]: time="2025-05-17T00:21:47.289444221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:47.319877 systemd[1]: Started cri-containerd-4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf.scope - libcontainer container 4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf. 
May 17 00:21:47.476564 containerd[1459]: time="2025-05-17T00:21:47.476441838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-599954c495-65fz2,Uid:e57b1bf0-91c0-4dac-b141-813710ca490d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ce357c02abe6a418a16220c1c4c0463c405d5b87eeb1f0df83b38b4b30fd0cf\"" May 17 00:21:47.495589 containerd[1459]: time="2025-05-17T00:21:47.495456875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:21:47.786566 containerd[1459]: time="2025-05-17T00:21:47.786319392Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:47.788576 containerd[1459]: time="2025-05-17T00:21:47.788021479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:47.788576 containerd[1459]: time="2025-05-17T00:21:47.788060174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:21:47.798079 kubelet[2458]: E0517 00:21:47.797830 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:47.798079 kubelet[2458]: E0517 00:21:47.798040 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:47.805310 kubelet[2458]: E0517 00:21:47.805224 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7f28aac9d54108bed1a27017a72b7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pt66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-599954c495-65fz2_calico-system(e57b1bf0-91c0-4dac-b141-813710ca490d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:47.810263 containerd[1459]: time="2025-05-17T00:21:47.810031122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:21:48.060836 containerd[1459]: time="2025-05-17T00:21:48.060457595Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:48.063682 containerd[1459]: time="2025-05-17T00:21:48.063578838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:48.065228 containerd[1459]: time="2025-05-17T00:21:48.063620197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:21:48.065676 kubelet[2458]: E0517 00:21:48.064797 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:48.065676 kubelet[2458]: E0517 00:21:48.064869 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:48.065846 kubelet[2458]: E0517 00:21:48.065064 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pt66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-599954c495-65fz2_calico-system(e57b1bf0-91c0-4dac-b141-813710ca490d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:48.067589 kubelet[2458]: E0517 00:21:48.066382 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:21:48.165587 kernel: bpftool[3978]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:21:48.412085 kubelet[2458]: E0517 00:21:48.411958 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:21:48.516069 systemd-networkd[1370]: vxlan.calico: Link UP May 17 00:21:48.516076 systemd-networkd[1370]: vxlan.calico: Gained carrier May 17 00:21:48.973796 containerd[1459]: time="2025-05-17T00:21:48.973152963Z" level=info msg="StopPodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\"" May 17 00:21:48.977444 containerd[1459]: time="2025-05-17T00:21:48.976187592Z" level=info msg="StopPodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\"" May 17 00:21:49.116496 systemd-networkd[1370]: cali9f9371f67ec: Gained IPv6LL May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.063 [INFO][4064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.065 [INFO][4064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" iface="eth0" netns="/var/run/netns/cni-b726ee27-9fc0-0d5d-9ea3-dcfdea3c639f" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.065 [INFO][4064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" iface="eth0" netns="/var/run/netns/cni-b726ee27-9fc0-0d5d-9ea3-dcfdea3c639f" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.066 [INFO][4064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" iface="eth0" netns="/var/run/netns/cni-b726ee27-9fc0-0d5d-9ea3-dcfdea3c639f" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.066 [INFO][4064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.066 [INFO][4064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.117 [INFO][4081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.117 [INFO][4081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.117 [INFO][4081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.133 [WARNING][4081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.133 [INFO][4081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.139 [INFO][4081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:49.153027 containerd[1459]: 2025-05-17 00:21:49.144 [INFO][4064] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:21:49.157871 containerd[1459]: time="2025-05-17T00:21:49.154734023Z" level=info msg="TearDown network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" successfully" May 17 00:21:49.157871 containerd[1459]: time="2025-05-17T00:21:49.154786232Z" level=info msg="StopPodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" returns successfully" May 17 00:21:49.159913 containerd[1459]: time="2025-05-17T00:21:49.159206916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8sjh,Uid:8567c791-2cba-41d1-9dcc-a920a450b5ec,Namespace:kube-system,Attempt:1,}" May 17 00:21:49.159971 kubelet[2458]: E0517 00:21:49.158034 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:49.164050 systemd[1]: run-netns-cni\x2db726ee27\x2d9fc0\x2d0d5d\x2d9ea3\x2ddcfdea3c639f.mount: Deactivated successfully. May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.073 [INFO][4068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.073 [INFO][4068] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" iface="eth0" netns="/var/run/netns/cni-03e7c3a6-8573-83f2-b850-07982823728e" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.073 [INFO][4068] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" iface="eth0" netns="/var/run/netns/cni-03e7c3a6-8573-83f2-b850-07982823728e" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.074 [INFO][4068] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" iface="eth0" netns="/var/run/netns/cni-03e7c3a6-8573-83f2-b850-07982823728e" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.074 [INFO][4068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.074 [INFO][4068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.130 [INFO][4083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.130 [INFO][4083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.139 [INFO][4083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.150 [WARNING][4083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.150 [INFO][4083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.154 [INFO][4083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:49.176701 containerd[1459]: 2025-05-17 00:21:49.168 [INFO][4068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:21:49.177648 containerd[1459]: time="2025-05-17T00:21:49.177306573Z" level=info msg="TearDown network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" successfully" May 17 00:21:49.177648 containerd[1459]: time="2025-05-17T00:21:49.177350894Z" level=info msg="StopPodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" returns successfully" May 17 00:21:49.179233 containerd[1459]: time="2025-05-17T00:21:49.178816010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8bd85b9-zb9lw,Uid:535f48a8-c7ba-4110-afb4-e41a21f02377,Namespace:calico-system,Attempt:1,}" May 17 00:21:49.180772 systemd[1]: run-netns-cni\x2d03e7c3a6\x2d8573\x2d83f2\x2db850\x2d07982823728e.mount: Deactivated successfully. 
May 17 00:21:49.415838 kubelet[2458]: E0517 00:21:49.415768 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:21:49.462932 systemd-networkd[1370]: cali686c174d889: Link UP May 17 00:21:49.466902 systemd-networkd[1370]: cali686c174d889: Gained carrier May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.281 [INFO][4095] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0 coredns-674b8bbfcf- kube-system 8567c791-2cba-41d1-9dcc-a920a450b5ec 992 0 2025-05-17 00:21:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 coredns-674b8bbfcf-k8sjh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali686c174d889 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.281 [INFO][4095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.367 [INFO][4121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" HandleID="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.367 [INFO][4121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" HandleID="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000233760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-6deca81674", "pod":"coredns-674b8bbfcf-k8sjh", "timestamp":"2025-05-17 00:21:49.367442776 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.367 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.367 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.367 [INFO][4121] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.379 [INFO][4121] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.390 [INFO][4121] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.399 [INFO][4121] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.402 [INFO][4121] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.406 [INFO][4121] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.406 [INFO][4121] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.410 [INFO][4121] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.423 [INFO][4121] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.435 [INFO][4121] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.130/26] block=192.168.81.128/26 handle="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.436 [INFO][4121] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.130/26] handle="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.436 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:49.495174 containerd[1459]: 2025-05-17 00:21:49.436 [INFO][4121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.130/26] IPv6=[] ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" HandleID="k8s-pod-network.da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.497356 containerd[1459]: 2025-05-17 00:21:49.443 [INFO][4095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8567c791-2cba-41d1-9dcc-a920a450b5ec", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"coredns-674b8bbfcf-k8sjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali686c174d889", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:49.497356 containerd[1459]: 2025-05-17 00:21:49.448 [INFO][4095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.130/32] ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.497356 containerd[1459]: 2025-05-17 00:21:49.448 [INFO][4095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali686c174d889 ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.497356 containerd[1459]: 2025-05-17 00:21:49.467 [INFO][4095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.497356 containerd[1459]: 2025-05-17 00:21:49.468 [INFO][4095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8567c791-2cba-41d1-9dcc-a920a450b5ec", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b", Pod:"coredns-674b8bbfcf-k8sjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali686c174d889", MAC:"5a:fc:2d:8b:99:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:49.497356 containerd[1459]: 2025-05-17 00:21:49.486 [INFO][4095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8sjh" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:21:49.539610 containerd[1459]: time="2025-05-17T00:21:49.539215793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.539610 containerd[1459]: time="2025-05-17T00:21:49.539290252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.539610 containerd[1459]: time="2025-05-17T00:21:49.539307948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.540688 containerd[1459]: time="2025-05-17T00:21:49.540615810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.581217 systemd[1]: Started cri-containerd-da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b.scope - libcontainer container da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b. May 17 00:21:49.587110 systemd-networkd[1370]: calib88c5cffaaa: Link UP May 17 00:21:49.587379 systemd-networkd[1370]: calib88c5cffaaa: Gained carrier May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.324 [INFO][4109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0 calico-kube-controllers-7f8bd85b9- calico-system 535f48a8-c7ba-4110-afb4-e41a21f02377 991 0 2025-05-17 00:21:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f8bd85b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 calico-kube-controllers-7f8bd85b9-zb9lw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib88c5cffaaa [] [] }} ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.325 [INFO][4109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.381 [INFO][4127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" HandleID="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.381 [INFO][4127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" HandleID="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-6deca81674", "pod":"calico-kube-controllers-7f8bd85b9-zb9lw", "timestamp":"2025-05-17 00:21:49.381084266 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.381 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.436 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.440 [INFO][4127] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.481 [INFO][4127] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.501 [INFO][4127] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.518 [INFO][4127] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.524 [INFO][4127] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.531 [INFO][4127] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.531 [INFO][4127] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.537 [INFO][4127] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.545 [INFO][4127] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.570 [INFO][4127] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.131/26] block=192.168.81.128/26 handle="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.570 [INFO][4127] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.131/26] handle="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" host="ci-4081.3.3-n-6deca81674" May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.570 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:49.626075 containerd[1459]: 2025-05-17 00:21:49.570 [INFO][4127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.131/26] IPv6=[] ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" HandleID="k8s-pod-network.f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.629813 containerd[1459]: 2025-05-17 00:21:49.577 [INFO][4109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0", GenerateName:"calico-kube-controllers-7f8bd85b9-", Namespace:"calico-system", SelfLink:"", UID:"535f48a8-c7ba-4110-afb4-e41a21f02377", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8bd85b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"calico-kube-controllers-7f8bd85b9-zb9lw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib88c5cffaaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:49.629813 containerd[1459]: 2025-05-17 00:21:49.579 [INFO][4109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.131/32] ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.629813 containerd[1459]: 2025-05-17 00:21:49.579 [INFO][4109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib88c5cffaaa ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.629813 containerd[1459]: 2025-05-17 00:21:49.589 [INFO][4109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 
00:21:49.629813 containerd[1459]: 2025-05-17 00:21:49.591 [INFO][4109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0", GenerateName:"calico-kube-controllers-7f8bd85b9-", Namespace:"calico-system", SelfLink:"", UID:"535f48a8-c7ba-4110-afb4-e41a21f02377", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8bd85b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f", Pod:"calico-kube-controllers-7f8bd85b9-zb9lw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib88c5cffaaa", MAC:"22:4d:17:6a:2f:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:49.629813 containerd[1459]: 2025-05-17 00:21:49.617 [INFO][4109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f" Namespace="calico-system" Pod="calico-kube-controllers-7f8bd85b9-zb9lw" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:21:49.627725 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL May 17 00:21:49.684297 containerd[1459]: time="2025-05-17T00:21:49.683975043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.684985 containerd[1459]: time="2025-05-17T00:21:49.684685217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.684985 containerd[1459]: time="2025-05-17T00:21:49.684835563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.692568 containerd[1459]: time="2025-05-17T00:21:49.690430394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.695217 containerd[1459]: time="2025-05-17T00:21:49.695167643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8sjh,Uid:8567c791-2cba-41d1-9dcc-a920a450b5ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b\"" May 17 00:21:49.696619 kubelet[2458]: E0517 00:21:49.696574 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:49.704444 containerd[1459]: time="2025-05-17T00:21:49.704396542Z" level=info msg="CreateContainer within sandbox \"da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:49.736836 systemd[1]: Started cri-containerd-f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f.scope - libcontainer container f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f. May 17 00:21:49.746263 containerd[1459]: time="2025-05-17T00:21:49.746172926Z" level=info msg="CreateContainer within sandbox \"da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d65bbdd873fddb2e01e521fec633b8e8a4411898dac008dfa322717911d0f4da\"" May 17 00:21:49.747506 containerd[1459]: time="2025-05-17T00:21:49.747447627Z" level=info msg="StartContainer for \"d65bbdd873fddb2e01e521fec633b8e8a4411898dac008dfa322717911d0f4da\"" May 17 00:21:49.806825 systemd[1]: Started cri-containerd-d65bbdd873fddb2e01e521fec633b8e8a4411898dac008dfa322717911d0f4da.scope - libcontainer container d65bbdd873fddb2e01e521fec633b8e8a4411898dac008dfa322717911d0f4da. May 17 00:21:49.830554 containerd[1459]: time="2025-05-17T00:21:49.830028565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8bd85b9-zb9lw,Uid:535f48a8-c7ba-4110-afb4-e41a21f02377,Namespace:calico-system,Attempt:1,} returns sandbox id \"f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f\"" May 17 00:21:49.835547 containerd[1459]: time="2025-05-17T00:21:49.835243525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:21:49.874266 containerd[1459]: time="2025-05-17T00:21:49.874061547Z" level=info msg="StartContainer for \"d65bbdd873fddb2e01e521fec633b8e8a4411898dac008dfa322717911d0f4da\" returns successfully" May 17 00:21:49.978509 containerd[1459]: time="2025-05-17T00:21:49.977197143Z" level=info msg="StopPodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\"" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.077 [INFO][4278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.078 [INFO][4278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" iface="eth0" netns="/var/run/netns/cni-6aa64d9b-7421-0c0b-d42f-37102fdc9db9" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.080 [INFO][4278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" iface="eth0" netns="/var/run/netns/cni-6aa64d9b-7421-0c0b-d42f-37102fdc9db9" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.086 [INFO][4278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" iface="eth0" netns="/var/run/netns/cni-6aa64d9b-7421-0c0b-d42f-37102fdc9db9" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.089 [INFO][4278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.089 [INFO][4278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.184 [INFO][4287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.185 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.185 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.199 [WARNING][4287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.199 [INFO][4287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.202 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:50.208413 containerd[1459]: 2025-05-17 00:21:50.205 [INFO][4278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:21:50.214778 containerd[1459]: time="2025-05-17T00:21:50.208621999Z" level=info msg="TearDown network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" successfully" May 17 00:21:50.214778 containerd[1459]: time="2025-05-17T00:21:50.208658981Z" level=info msg="StopPodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" returns successfully" May 17 00:21:50.214778 containerd[1459]: time="2025-05-17T00:21:50.209976647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-789m7,Uid:3741cab5-ed2e-41cc-bf79-c5ceb8c1246a,Namespace:calico-system,Attempt:1,}" May 17 00:21:50.219083 systemd[1]: run-netns-cni\x2d6aa64d9b\x2d7421\x2d0c0b\x2dd42f\x2d37102fdc9db9.mount: Deactivated successfully. 
May 17 00:21:50.432052 kubelet[2458]: E0517 00:21:50.430930 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:50.432884 systemd-networkd[1370]: cali7dca463b334: Link UP May 17 00:21:50.435936 systemd-networkd[1370]: cali7dca463b334: Gained carrier May 17 00:21:50.482096 kubelet[2458]: I0517 00:21:50.480784 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k8sjh" podStartSLOduration=38.480753005 podStartE2EDuration="38.480753005s" podCreationTimestamp="2025-05-17 00:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:50.477869852 +0000 UTC m=+43.710152638" watchObservedRunningTime="2025-05-17 00:21:50.480753005 +0000 UTC m=+43.713035791" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.308 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0 csi-node-driver- calico-system 3741cab5-ed2e-41cc-bf79-c5ceb8c1246a 1013 0 2025-05-17 00:21:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 csi-node-driver-789m7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7dca463b334 [] [] }} ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.309 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.357 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" HandleID="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.357 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" HandleID="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d30f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-6deca81674", "pod":"csi-node-driver-789m7", "timestamp":"2025-05-17 00:21:50.357055853 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.357 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.357 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.357 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.368 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.377 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.387 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.390 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.394 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.394 [INFO][4308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.398 [INFO][4308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.405 [INFO][4308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.419 [INFO][4308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.132/26] block=192.168.81.128/26 handle="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.419 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.132/26] handle="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.420 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:50.486737 containerd[1459]: 2025-05-17 00:21:50.420 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.132/26] IPv6=[] ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" HandleID="k8s-pod-network.15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.490024 containerd[1459]: 2025-05-17 00:21:50.424 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"csi-node-driver-789m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7dca463b334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:50.490024 containerd[1459]: 2025-05-17 00:21:50.425 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.132/32] ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.490024 containerd[1459]: 2025-05-17 00:21:50.425 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dca463b334 ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.490024 containerd[1459]: 2025-05-17 00:21:50.439 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.490024 containerd[1459]: 2025-05-17 00:21:50.439 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c", Pod:"csi-node-driver-789m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7dca463b334", MAC:"b2:91:01:7c:ac:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:50.490024 containerd[1459]: 2025-05-17 00:21:50.482 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c" Namespace="calico-system" Pod="csi-node-driver-789m7" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:21:50.544269 containerd[1459]: time="2025-05-17T00:21:50.540821279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:50.544269 containerd[1459]: time="2025-05-17T00:21:50.540908402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:50.544269 containerd[1459]: time="2025-05-17T00:21:50.541066766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:50.544269 containerd[1459]: time="2025-05-17T00:21:50.541214760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:50.589674 systemd-networkd[1370]: cali686c174d889: Gained IPv6LL May 17 00:21:50.597100 systemd[1]: Started cri-containerd-15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c.scope - libcontainer container 15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c. 
May 17 00:21:50.671450 containerd[1459]: time="2025-05-17T00:21:50.671371848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-789m7,Uid:3741cab5-ed2e-41cc-bf79-c5ceb8c1246a,Namespace:calico-system,Attempt:1,} returns sandbox id \"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c\"" May 17 00:21:50.908660 systemd-networkd[1370]: calib88c5cffaaa: Gained IPv6LL May 17 00:21:50.973932 containerd[1459]: time="2025-05-17T00:21:50.973848005Z" level=info msg="StopPodSandbox for \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\"" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.097 [INFO][4377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.097 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" iface="eth0" netns="/var/run/netns/cni-8ea1c6ab-7c39-5e38-7a8d-89dd7d08d4eb" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.098 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" iface="eth0" netns="/var/run/netns/cni-8ea1c6ab-7c39-5e38-7a8d-89dd7d08d4eb" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.099 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" iface="eth0" netns="/var/run/netns/cni-8ea1c6ab-7c39-5e38-7a8d-89dd7d08d4eb" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.099 [INFO][4377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.099 [INFO][4377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.150 [INFO][4384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.151 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.151 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.163 [WARNING][4384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.163 [INFO][4384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.167 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:51.175281 containerd[1459]: 2025-05-17 00:21:51.171 [INFO][4377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:21:51.179845 containerd[1459]: time="2025-05-17T00:21:51.179748789Z" level=info msg="TearDown network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" successfully" May 17 00:21:51.179845 containerd[1459]: time="2025-05-17T00:21:51.179794035Z" level=info msg="StopPodSandbox for \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" returns successfully" May 17 00:21:51.180882 containerd[1459]: time="2025-05-17T00:21:51.180846108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l9z28,Uid:c8e662ac-310d-4631-a4cc-a86f8c336b26,Namespace:calico-system,Attempt:1,}" May 17 00:21:51.186297 systemd[1]: run-netns-cni\x2d8ea1c6ab\x2d7c39\x2d5e38\x2d7a8d\x2d89dd7d08d4eb.mount: Deactivated successfully. May 17 00:21:51.429314 systemd-networkd[1370]: cali6df082cce77: Link UP May 17 00:21:51.433434 systemd-networkd[1370]: cali6df082cce77: Gained carrier May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.274 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0 goldmane-78d55f7ddc- calico-system c8e662ac-310d-4631-a4cc-a86f8c336b26 1031 0 2025-05-17 00:21:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 goldmane-78d55f7ddc-l9z28 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6df082cce77 [] [] }} ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.275 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.325 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" HandleID="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" 
Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.325 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" HandleID="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-6deca81674", "pod":"goldmane-78d55f7ddc-l9z28", "timestamp":"2025-05-17 00:21:51.325199468 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.325 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.325 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.325 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.341 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.351 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.362 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.367 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.381 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.382 [INFO][4404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.391 [INFO][4404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.400 [INFO][4404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.413 [INFO][4404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.133/26] block=192.168.81.128/26 handle="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.413 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.133/26] 
handle="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" host="ci-4081.3.3-n-6deca81674" May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.413 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:51.471335 containerd[1459]: 2025-05-17 00:21:51.413 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.133/26] IPv6=[] ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" HandleID="k8s-pod-network.fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.475961 containerd[1459]: 2025-05-17 00:21:51.421 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"c8e662ac-310d-4631-a4cc-a86f8c336b26", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"goldmane-78d55f7ddc-l9z28", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6df082cce77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:51.475961 containerd[1459]: 2025-05-17 00:21:51.421 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.133/32] ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.475961 containerd[1459]: 2025-05-17 00:21:51.421 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6df082cce77 ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.475961 containerd[1459]: 2025-05-17 00:21:51.434 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 
00:21:51.475961 containerd[1459]: 2025-05-17 00:21:51.435 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"c8e662ac-310d-4631-a4cc-a86f8c336b26", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e", Pod:"goldmane-78d55f7ddc-l9z28", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6df082cce77", MAC:"2a:a6:5e:14:5f:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:51.475961 containerd[1459]: 2025-05-17 00:21:51.460 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-l9z28" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:21:51.482666 kubelet[2458]: E0517 00:21:51.482379 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:51.543483 containerd[1459]: time="2025-05-17T00:21:51.539508135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:51.543483 containerd[1459]: time="2025-05-17T00:21:51.542167999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:51.543483 containerd[1459]: time="2025-05-17T00:21:51.542209413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:51.543483 containerd[1459]: time="2025-05-17T00:21:51.542708000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:51.582018 systemd[1]: Started cri-containerd-fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e.scope - libcontainer container fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e. 
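Annotation: each burst of "loading plugin io.containerd.*" lines tagged runtime=io.containerd.runc.v2, followed by systemd starting a cri-containerd-<id>.scope, is a new runc v2 shim coming up for a container (here the goldmane sandbox). For reference, the same shim spawn can be triggered through the containerd Go client; a minimal sketch under stated assumptions (this is not the CRI code path kubelet uses, and the socket path and image reference are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask is the step that launches the io.containerd.runc.v2 shim,
	// which logs the four "loading plugin" lines seen above as it starts.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```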
May 17 00:21:51.700326 containerd[1459]: time="2025-05-17T00:21:51.700164188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-l9z28,Uid:c8e662ac-310d-4631-a4cc-a86f8c336b26,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e\"" May 17 00:21:51.973421 containerd[1459]: time="2025-05-17T00:21:51.973049264Z" level=info msg="StopPodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\"" May 17 00:21:51.973421 containerd[1459]: time="2025-05-17T00:21:51.973119226Z" level=info msg="StopPodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\"" May 17 00:21:52.187870 systemd-networkd[1370]: cali7dca463b334: Gained IPv6LL May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.161 [INFO][4483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.161 [INFO][4483] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" iface="eth0" netns="/var/run/netns/cni-7eda9b77-2831-684b-df56-ae23d4a4291e" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.162 [INFO][4483] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" iface="eth0" netns="/var/run/netns/cni-7eda9b77-2831-684b-df56-ae23d4a4291e" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.165 [INFO][4483] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" iface="eth0" netns="/var/run/netns/cni-7eda9b77-2831-684b-df56-ae23d4a4291e" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.165 [INFO][4483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.165 [INFO][4483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.239 [INFO][4499] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.240 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.240 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.263 [WARNING][4499] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.263 [INFO][4499] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.268 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:52.283112 containerd[1459]: 2025-05-17 00:21:52.277 [INFO][4483] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:21:52.287793 containerd[1459]: time="2025-05-17T00:21:52.285243186Z" level=info msg="TearDown network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" successfully" May 17 00:21:52.287793 containerd[1459]: time="2025-05-17T00:21:52.285284588Z" level=info msg="StopPodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" returns successfully" May 17 00:21:52.286974 systemd[1]: run-netns-cni\x2d7eda9b77\x2d2831\x2d684b\x2ddf56\x2dae23d4a4291e.mount: Deactivated successfully. May 17 00:21:52.291340 containerd[1459]: time="2025-05-17T00:21:52.291187265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-g9kg6,Uid:877588de-b588-49a3-a0b6-58a44269c024,Namespace:calico-apiserver,Attempt:1,}" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.136 [INFO][4484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.140 [INFO][4484] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" iface="eth0" netns="/var/run/netns/cni-ea4aa0f6-5b13-36ad-4aea-19575d7a0720" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.151 [INFO][4484] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" iface="eth0" netns="/var/run/netns/cni-ea4aa0f6-5b13-36ad-4aea-19575d7a0720" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.151 [INFO][4484] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" iface="eth0" netns="/var/run/netns/cni-ea4aa0f6-5b13-36ad-4aea-19575d7a0720" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.151 [INFO][4484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.151 [INFO][4484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.296 [INFO][4496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.299 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.300 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.331 [WARNING][4496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.332 [INFO][4496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.341 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:52.359000 containerd[1459]: 2025-05-17 00:21:52.351 [INFO][4484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:21:52.362634 containerd[1459]: time="2025-05-17T00:21:52.359852746Z" level=info msg="TearDown network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" successfully" May 17 00:21:52.362634 containerd[1459]: time="2025-05-17T00:21:52.359888960Z" level=info msg="StopPodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" returns successfully" May 17 00:21:52.363689 containerd[1459]: time="2025-05-17T00:21:52.363079917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-rnppq,Uid:26a01dd2-ca4a-48d8-b676-505f65a04723,Namespace:calico-apiserver,Attempt:1,}" May 17 00:21:52.366927 systemd[1]: run-netns-cni\x2dea4aa0f6\x2d5b13\x2d36ad\x2d4aea\x2d19575d7a0720.mount: Deactivated successfully. 
May 17 00:21:52.503057 kubelet[2458]: E0517 00:21:52.503009 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:52.708262 systemd-networkd[1370]: calib058d04a49b: Link UP May 17 00:21:52.717172 systemd-networkd[1370]: calib058d04a49b: Gained carrier May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.478 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0 calico-apiserver-5759bc5bd9- calico-apiserver 877588de-b588-49a3-a0b6-58a44269c024 1042 0 2025-05-17 00:21:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5759bc5bd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 calico-apiserver-5759bc5bd9-g9kg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib058d04a49b [] [] }} ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.478 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.582 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" HandleID="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.583 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" HandleID="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d93c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-6deca81674", "pod":"calico-apiserver-5759bc5bd9-g9kg6", "timestamp":"2025-05-17 00:21:52.582791173 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.583 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.583 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
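Annotation: the kubelet error repeated through this span ("Nameserver limits exceeded") fires when the resolv.conf to be handed to a pod lists more nameservers than the resolver limit of 3 (the glibc MAXNS limit); kubelet truncates the list and logs the applied line. The applied line here, 67.207.67.3 67.207.67.2 67.207.67.3, still contains a duplicate, so the truncation evidently preserves order without deduplicating. A sketch of that behavior (the function name is hypothetical):

```go
package main

import "fmt"

const maxNameservers = 3 // glibc resolver limit (MAXNS)

// applyNameserverLimit truncates to the limit, keeping order. It does not
// deduplicate, which is why the applied line above can repeat 67.207.67.3.
func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	applied, truncated := applyNameserverLimit([]string{
		"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2",
	})
	fmt.Println(applied, truncated) // [67.207.67.3 67.207.67.2 67.207.67.3] true
}
```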
May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.583 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.604 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.626 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.640 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.645 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.652 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.652 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.655 [INFO][4537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.668 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.683 [INFO][4537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.134/26] block=192.168.81.128/26 handle="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.683 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.134/26] handle="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.683 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
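Annotation: the IPAM trace just above is the standard Calico claim sequence: take the host-wide lock, confirm the node's affinity to block 192.168.81.128/26, load the block, assign one address, then write the block back to the datastore to make the claim durable. Across this section the node claims .133 through .136 from that /26, one per pod. A toy allocator capturing the shape of the per-block assignment (the real datastore write and its retry-on-conflict are elided):

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock in the log
	base netip.Addr // first address of the /26, e.g. 192.168.81.128
	used [64]bool   // one slot per address in a /26
}

// claim returns the lowest free address in the block, marking it used.
func (b *block) claim() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for i, taken := range b.used {
		if !taken {
			b.used[i] = true
			addr := b.base
			for j := 0; j < i; j++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	// A fresh block; the node in the log had already consumed .128-.132.
	b := &block{base: netip.MustParseAddr("192.168.81.128")}
	for i := 0; i < 4; i++ {
		ip, _ := b.claim()
		fmt.Println(ip) // .128, .129, .130, .131
	}
}
```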
May 17 00:21:52.763513 containerd[1459]: 2025-05-17 00:21:52.683 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.134/26] IPv6=[] ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" HandleID="k8s-pod-network.f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.766459 containerd[1459]: 2025-05-17 00:21:52.694 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"877588de-b588-49a3-a0b6-58a44269c024", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"calico-apiserver-5759bc5bd9-g9kg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib058d04a49b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:52.766459 containerd[1459]: 2025-05-17 00:21:52.694 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.134/32] ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.766459 containerd[1459]: 2025-05-17 00:21:52.694 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib058d04a49b ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.766459 containerd[1459]: 2025-05-17 00:21:52.726 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.766459 containerd[1459]: 2025-05-17 00:21:52.729 
[INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"877588de-b588-49a3-a0b6-58a44269c024", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c", Pod:"calico-apiserver-5759bc5bd9-g9kg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib058d04a49b", MAC:"8e:19:1e:6a:d0:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:52.766459 containerd[1459]: 2025-05-17 00:21:52.756 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-g9kg6" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:21:52.877901 systemd-networkd[1370]: calic68c10de7a4: Link UP May 17 00:21:52.889359 containerd[1459]: time="2025-05-17T00:21:52.885273562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:52.889359 containerd[1459]: time="2025-05-17T00:21:52.885386654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:52.889359 containerd[1459]: time="2025-05-17T00:21:52.885436785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:52.890782 systemd-networkd[1370]: calic68c10de7a4: Gained carrier May 17 00:21:52.895926 containerd[1459]: time="2025-05-17T00:21:52.894301591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.527 [INFO][4521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0 calico-apiserver-5759bc5bd9- calico-apiserver 26a01dd2-ca4a-48d8-b676-505f65a04723 1041 0 2025-05-17 00:21:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5759bc5bd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 calico-apiserver-5759bc5bd9-rnppq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic68c10de7a4 [] [] }} ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.528 [INFO][4521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.682 [INFO][4542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" HandleID="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.682 [INFO][4542] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" HandleID="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-6deca81674", "pod":"calico-apiserver-5759bc5bd9-rnppq", "timestamp":"2025-05-17 00:21:52.682183102 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.682 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.684 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.685 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.716 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.737 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.760 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.768 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.777 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.779 [INFO][4542] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.786 [INFO][4542] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.803 [INFO][4542] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.823 [INFO][4542] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.135/26] block=192.168.81.128/26 handle="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.824 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.135/26] handle="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" host="ci-4081.3.3-n-6deca81674" May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.824 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
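Annotation: every host-side interface in these traces follows the same pattern, "cali" plus 11 hex digits (cali7dca463b334, cali6df082cce77, calib058d04a49b, calic68c10de7a4). Calico derives the name deterministically from a hash of the workload identity so that retries converge on the same veth name. A sketch of that derivation; the exact hash input shown here (namespace.pod) is an assumption and has varied across Calico versions:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload builds "cali" + first 11 hex chars of a SHA-1 digest,
// matching the shape of the interface names in the log. The input string
// is assumed, not confirmed against this Calico version.
func vethNameForWorkload(namespace, pod string) string {
	h := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
	return "cali" + hex.EncodeToString(h[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("calico-system", "goldmane-78d55f7ddc-l9z28"))
}
```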
May 17 00:21:52.938318 containerd[1459]: 2025-05-17 00:21:52.824 [INFO][4542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.135/26] IPv6=[] ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" HandleID="k8s-pod-network.64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.939380 containerd[1459]: 2025-05-17 00:21:52.853 [INFO][4521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a01dd2-ca4a-48d8-b676-505f65a04723", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"calico-apiserver-5759bc5bd9-rnppq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic68c10de7a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:52.939380 containerd[1459]: 2025-05-17 00:21:52.853 [INFO][4521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.135/32] ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.939380 containerd[1459]: 2025-05-17 00:21:52.853 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic68c10de7a4 ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.939380 containerd[1459]: 2025-05-17 00:21:52.905 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.939380 containerd[1459]: 2025-05-17 00:21:52.906 
[INFO][4521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a01dd2-ca4a-48d8-b676-505f65a04723", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d", Pod:"calico-apiserver-5759bc5bd9-rnppq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic68c10de7a4", MAC:"c2:a7:76:1c:45:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:52.939380 containerd[1459]: 2025-05-17 00:21:52.927 [INFO][4521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d" Namespace="calico-apiserver" Pod="calico-apiserver-5759bc5bd9-rnppq" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:21:52.982746 containerd[1459]: time="2025-05-17T00:21:52.981914119Z" level=info msg="StopPodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\"" May 17 00:21:52.993909 systemd[1]: Started cri-containerd-f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c.scope - libcontainer container f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c. May 17 00:21:53.020237 systemd-networkd[1370]: cali6df082cce77: Gained IPv6LL May 17 00:21:53.073958 containerd[1459]: time="2025-05-17T00:21:53.072825477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:53.073958 containerd[1459]: time="2025-05-17T00:21:53.072901493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:53.073958 containerd[1459]: time="2025-05-17T00:21:53.072916059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:53.073958 containerd[1459]: time="2025-05-17T00:21:53.073059989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:53.129795 systemd[1]: Started cri-containerd-64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d.scope - libcontainer container 64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d. May 17 00:21:53.317095 containerd[1459]: time="2025-05-17T00:21:53.317041358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-g9kg6,Uid:877588de-b588-49a3-a0b6-58a44269c024,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c\"" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.184 [INFO][4610] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.184 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" iface="eth0" netns="/var/run/netns/cni-75b9e332-7f63-e0ba-5e97-24c2c75334f6" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.185 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" iface="eth0" netns="/var/run/netns/cni-75b9e332-7f63-e0ba-5e97-24c2c75334f6" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.186 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" iface="eth0" netns="/var/run/netns/cni-75b9e332-7f63-e0ba-5e97-24c2c75334f6" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.186 [INFO][4610] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.186 [INFO][4610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.287 [INFO][4653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.288 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.288 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.306 [WARNING][4653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.306 [INFO][4653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.309 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:53.332693 containerd[1459]: 2025-05-17 00:21:53.320 [INFO][4610] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:21:53.332693 containerd[1459]: time="2025-05-17T00:21:53.329124891Z" level=info msg="TearDown network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" successfully" May 17 00:21:53.332693 containerd[1459]: time="2025-05-17T00:21:53.329196133Z" level=info msg="StopPodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" returns successfully" May 17 00:21:53.339430 kubelet[2458]: E0517 00:21:53.330882 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:53.340766 containerd[1459]: time="2025-05-17T00:21:53.335458784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9mg8,Uid:d63156b5-315e-4424-8e51-f55b1ec001db,Namespace:kube-system,Attempt:1,}" May 17 00:21:53.340314 systemd[1]: run-netns-cni\x2d75b9e332\x2d7f63\x2de0ba\x2d5e97\x2d24c2c75334f6.mount: Deactivated successfully. 
May 17 00:21:53.386406 containerd[1459]: time="2025-05-17T00:21:53.386162421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5759bc5bd9-rnppq,Uid:26a01dd2-ca4a-48d8-b676-505f65a04723,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d\"" May 17 00:21:53.770855 systemd-networkd[1370]: calicdbdd55df4d: Link UP May 17 00:21:53.775846 systemd-networkd[1370]: calicdbdd55df4d: Gained carrier May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.475 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0 coredns-674b8bbfcf- kube-system d63156b5-315e-4424-8e51-f55b1ec001db 1052 0 2025-05-17 00:21:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-6deca81674 coredns-674b8bbfcf-n9mg8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicdbdd55df4d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.475 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.610 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" HandleID="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.611 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" HandleID="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-6deca81674", "pod":"coredns-674b8bbfcf-n9mg8", "timestamp":"2025-05-17 00:21:53.61095272 +0000 UTC"}, Hostname:"ci-4081.3.3-n-6deca81674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.611 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.611 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.611 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-6deca81674' May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.641 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.663 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.682 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.688 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.697 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.128/26 host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.697 [INFO][4686] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.128/26 handle="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.703 [INFO][4686] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322 May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.717 [INFO][4686] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.128/26 handle="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.731 [INFO][4686] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.136/26] block=192.168.81.128/26 handle="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.732 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.136/26] handle="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" host="ci-4081.3.3-n-6deca81674" May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.732 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
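The IPAM trace above shows Calico's block-affinity model: the node ci-4081.3.3-n-6deca81674 takes the host-wide IPAM lock, confirms its affinity for the /26 block 192.168.81.128/26, and claims 192.168.81.136 from that block for the coredns pod. A minimal sketch of the block arithmetic involved (illustrative Go, not Calico's implementation):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The affine block from the trace: a /26 spans 2^(32-26) = 64 addresses.
        block := netip.MustParsePrefix("192.168.81.128/26")
        size := 1 << (32 - block.Bits())
        fmt.Printf("block %s holds %d addresses\n", block, size)

        // 192.168.81.136, the address claimed for coredns-674b8bbfcf-n9mg8,
        // must fall inside the affine block.
        assigned := netip.MustParseAddr("192.168.81.136")
        fmt.Printf("%s in block: %v\n", assigned, block.Contains(assigned))
    }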
May 17 00:21:53.830737 containerd[1459]: 2025-05-17 00:21:53.732 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.136/26] IPv6=[] ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" HandleID="k8s-pod-network.8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.835067 containerd[1459]: 2025-05-17 00:21:53.744 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d63156b5-315e-4424-8e51-f55b1ec001db", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"", Pod:"coredns-674b8bbfcf-n9mg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdbdd55df4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:53.835067 containerd[1459]: 2025-05-17 00:21:53.745 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.136/32] ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.835067 containerd[1459]: 2025-05-17 00:21:53.745 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdbdd55df4d ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.835067 containerd[1459]: 2025-05-17 00:21:53.780 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.835067 containerd[1459]: 2025-05-17 00:21:53.788 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d63156b5-315e-4424-8e51-f55b1ec001db", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322", Pod:"coredns-674b8bbfcf-n9mg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdbdd55df4d", MAC:"b2:c8:cd:2a:7f:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:53.835067 containerd[1459]: 2025-05-17 00:21:53.816 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9mg8" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:21:53.899534 containerd[1459]: time="2025-05-17T00:21:53.898723260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:53.900720 containerd[1459]: time="2025-05-17T00:21:53.898845375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:53.901093 containerd[1459]: time="2025-05-17T00:21:53.900910920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:53.902247 containerd[1459]: time="2025-05-17T00:21:53.902137323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:53.989067 systemd[1]: Started cri-containerd-8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322.scope - libcontainer container 8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322. May 17 00:21:54.047982 systemd-networkd[1370]: calib058d04a49b: Gained IPv6LL May 17 00:21:54.194996 containerd[1459]: time="2025-05-17T00:21:54.194591908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9mg8,Uid:d63156b5-315e-4424-8e51-f55b1ec001db,Namespace:kube-system,Attempt:1,} returns sandbox id \"8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322\"" May 17 00:21:54.201566 kubelet[2458]: E0517 00:21:54.199283 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:54.216900 containerd[1459]: time="2025-05-17T00:21:54.215982844Z" level=info msg="CreateContainer within sandbox \"8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:54.248145 containerd[1459]: time="2025-05-17T00:21:54.247899924Z" level=info msg="CreateContainer within sandbox \"8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7870c4b3d1d9de78e3b8635bf43cb204b26d60e4a375325c5c05926f443de596\"" May 17 00:21:54.251732 containerd[1459]: time="2025-05-17T00:21:54.251513075Z" level=info msg="StartContainer for \"7870c4b3d1d9de78e3b8635bf43cb204b26d60e4a375325c5c05926f443de596\"" May 17 00:21:54.361114 containerd[1459]: time="2025-05-17T00:21:54.360353715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:54.361114 containerd[1459]: time="2025-05-17T00:21:54.360822048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:21:54.363935 systemd[1]: Started cri-containerd-7870c4b3d1d9de78e3b8635bf43cb204b26d60e4a375325c5c05926f443de596.scope - libcontainer container 7870c4b3d1d9de78e3b8635bf43cb204b26d60e4a375325c5c05926f443de596. 
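The recurring dns.go:153 error records a limitation rather than a failure: glibc resolvers honor at most three nameserver entries, and the applied line here even carries a duplicate (67.207.67.3 appears twice), so kubelet warns that surplus entries were dropped when composing the pod's resolv.conf. A sketch of that three-entry cap (illustrative helper, not kubelet's code; the fourth entry below is hypothetical):

    package main

    import "fmt"

    // capNameservers mirrors the rule behind "Nameserver limits exceeded":
    // glibc only honors the first three nameserver lines (MAXNS), so anything
    // beyond that is truncated.
    func capNameservers(servers []string) []string {
        const maxNS = 3
        if len(servers) > maxNS {
            return servers[:maxNS]
        }
        return servers
    }

    func main() {
        applied := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2"}
        fmt.Println(capNameservers(applied)) // only the first three survive
    }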
May 17 00:21:54.368510 containerd[1459]: time="2025-05-17T00:21:54.368010333Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:54.375268 containerd[1459]: time="2025-05-17T00:21:54.375171830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:54.379119 containerd[1459]: time="2025-05-17T00:21:54.378624702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.543057834s" May 17 00:21:54.379119 containerd[1459]: time="2025-05-17T00:21:54.378687223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:21:54.385298 containerd[1459]: time="2025-05-17T00:21:54.385235743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:21:54.416465 containerd[1459]: time="2025-05-17T00:21:54.416411123Z" level=info msg="CreateContainer within sandbox \"f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:21:54.464772 containerd[1459]: time="2025-05-17T00:21:54.464702604Z" level=info msg="CreateContainer within sandbox \"f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fda05621fdb0f9b2cfb7b969f9533fff4bea2c9322a12e46d93367c976c5a9ff\"" May 17 00:21:54.466961 containerd[1459]: time="2025-05-17T00:21:54.466907002Z" level=info msg="StartContainer for \"fda05621fdb0f9b2cfb7b969f9533fff4bea2c9322a12e46d93367c976c5a9ff\"" May 17 00:21:54.492676 systemd-networkd[1370]: calic68c10de7a4: Gained IPv6LL May 17 00:21:54.504512 containerd[1459]: time="2025-05-17T00:21:54.503906585Z" level=info msg="StartContainer for \"7870c4b3d1d9de78e3b8635bf43cb204b26d60e4a375325c5c05926f443de596\" returns successfully" May 17 00:21:54.557621 kubelet[2458]: E0517 00:21:54.557371 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:54.568888 systemd[1]: Started cri-containerd-fda05621fdb0f9b2cfb7b969f9533fff4bea2c9322a12e46d93367c976c5a9ff.scope - libcontainer container fda05621fdb0f9b2cfb7b969f9533fff4bea2c9322a12e46d93367c976c5a9ff. 
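The kube-controllers pull logs both the bytes fetched (51178512) and the wall time (4.543057834s), which gives a rough effective transfer rate; back-of-envelope only, since the logged duration also covers manifest resolution and unpacking:

    package main

    import "fmt"

    func main() {
        const bytesRead = 51178512  // "bytes read" logged for the pull
        const seconds = 4.543057834 // logged pull duration
        fmt.Printf("~%.1f MiB/s effective\n", bytesRead/seconds/(1<<20))
    }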
May 17 00:21:54.589967 kubelet[2458]: I0517 00:21:54.589773 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n9mg8" podStartSLOduration=42.589755667 podStartE2EDuration="42.589755667s" podCreationTimestamp="2025-05-17 00:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:54.589613412 +0000 UTC m=+47.821896196" watchObservedRunningTime="2025-05-17 00:21:54.589755667 +0000 UTC m=+47.822038451" May 17 00:21:54.731601 containerd[1459]: time="2025-05-17T00:21:54.731241321Z" level=info msg="StartContainer for \"fda05621fdb0f9b2cfb7b969f9533fff4bea2c9322a12e46d93367c976c5a9ff\" returns successfully" May 17 00:21:54.812832 systemd-networkd[1370]: calicdbdd55df4d: Gained IPv6LL May 17 00:21:55.562281 kubelet[2458]: E0517 00:21:55.562225 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:55.584064 kubelet[2458]: I0517 00:21:55.583254 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f8bd85b9-zb9lw" podStartSLOduration=24.036020713 podStartE2EDuration="28.583227996s" podCreationTimestamp="2025-05-17 00:21:27 +0000 UTC" firstStartedPulling="2025-05-17 00:21:49.834194228 +0000 UTC m=+43.066476992" lastFinishedPulling="2025-05-17 00:21:54.381401511 +0000 UTC m=+47.613684275" observedRunningTime="2025-05-17 00:21:55.581544477 +0000 UTC m=+48.813827257" watchObservedRunningTime="2025-05-17 00:21:55.583227996 +0000 UTC m=+48.815510779" May 17 00:21:55.862705 containerd[1459]: time="2025-05-17T00:21:55.861716590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:55.863321 containerd[1459]: time="2025-05-17T00:21:55.862993667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:21:55.863875 containerd[1459]: time="2025-05-17T00:21:55.863835925Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:55.866938 containerd[1459]: time="2025-05-17T00:21:55.866867279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:55.868884 containerd[1459]: time="2025-05-17T00:21:55.868825283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.483538636s" May 17 00:21:55.869252 containerd[1459]: time="2025-05-17T00:21:55.869115152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:21:55.871124 containerd[1459]: time="2025-05-17T00:21:55.870442923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:21:55.877858 containerd[1459]: 
time="2025-05-17T00:21:55.877497022Z" level=info msg="CreateContainer within sandbox \"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:21:55.898676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795233581.mount: Deactivated successfully. May 17 00:21:55.902707 containerd[1459]: time="2025-05-17T00:21:55.902639415Z" level=info msg="CreateContainer within sandbox \"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fa8adc3fbe56a1d07a1119f6077d7206f2416831423164724e55d4db83442801\"" May 17 00:21:55.903652 containerd[1459]: time="2025-05-17T00:21:55.903614570Z" level=info msg="StartContainer for \"fa8adc3fbe56a1d07a1119f6077d7206f2416831423164724e55d4db83442801\"" May 17 00:21:55.972535 systemd[1]: Started cri-containerd-fa8adc3fbe56a1d07a1119f6077d7206f2416831423164724e55d4db83442801.scope - libcontainer container fa8adc3fbe56a1d07a1119f6077d7206f2416831423164724e55d4db83442801. May 17 00:21:56.026276 containerd[1459]: time="2025-05-17T00:21:56.026182352Z" level=info msg="StartContainer for \"fa8adc3fbe56a1d07a1119f6077d7206f2416831423164724e55d4db83442801\" returns successfully" May 17 00:21:56.182180 containerd[1459]: time="2025-05-17T00:21:56.182117470Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:56.183266 containerd[1459]: time="2025-05-17T00:21:56.183188528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:56.183436 containerd[1459]: time="2025-05-17T00:21:56.183343647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:21:56.183870 kubelet[2458]: E0517 00:21:56.183650 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:56.183870 kubelet[2458]: E0517 00:21:56.183715 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:56.186103 containerd[1459]: time="2025-05-17T00:21:56.184544244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 
00:21:56.200216 kubelet[2458]: E0517 00:21:56.200011 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc6t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-l9z28_calico-system(c8e662ac-310d-4631-a4cc-a86f8c336b26): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:56.201826 kubelet[2458]: E0517 00:21:56.201750 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:21:56.286697 systemd[1]: run-containerd-runc-k8s.io-fa8adc3fbe56a1d07a1119f6077d7206f2416831423164724e55d4db83442801-runc.UBPvg2.mount: Deactivated successfully. May 17 00:21:56.569121 kubelet[2458]: E0517 00:21:56.567996 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:56.569952 kubelet[2458]: E0517 00:21:56.569895 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:21:56.574282 kubelet[2458]: I0517 00:21:56.574225 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:57.587254 kubelet[2458]: E0517 00:21:57.587210 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:21:58.402049 systemd[1]: Started sshd@7-64.23.130.50:22-139.178.68.195:58808.service - OpenSSH per-connection server daemon (139.178.68.195:58808). May 17 00:21:58.579948 sshd[4894]: Accepted publickey for core from 139.178.68.195 port 58808 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:21:58.584832 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:58.600744 systemd-logind[1445]: New session 8 of user core. May 17 00:21:58.606863 systemd[1]: Started session-8.scope - Session 8 of User core. 
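Once the goldmane pull fails with 403 Forbidden, the pod flips from ErrImagePull to ImagePullBackOff and kubelet retries on an exponential schedule; by default that backoff starts at 10s, doubles per failure, and caps at 5 minutes. A sketch of that default schedule (assumed documented defaults, nothing read from this node's config):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Default kubelet image-pull backoff: 10s initial, doubling per
        // failure, capped at 5 minutes.
        d, max := 10*time.Second, 5*time.Minute
        for i := 1; i <= 7; i++ {
            fmt.Printf("retry %d after %s\n", i, d)
            if d *= 2; d > max {
                d = max
            }
        }
    }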
May 17 00:21:59.573330 containerd[1459]: time="2025-05-17T00:21:59.573152496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:59.575356 containerd[1459]: time="2025-05-17T00:21:59.575296431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:21:59.577562 containerd[1459]: time="2025-05-17T00:21:59.576742481Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:59.579646 containerd[1459]: time="2025-05-17T00:21:59.579606883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:59.581783 containerd[1459]: time="2025-05-17T00:21:59.581743463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.397163085s" May 17 00:21:59.581933 containerd[1459]: time="2025-05-17T00:21:59.581918059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:21:59.583547 containerd[1459]: time="2025-05-17T00:21:59.583171200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:21:59.585900 sshd[4894]: pam_unix(sshd:session): session closed for user core May 17 00:21:59.593242 containerd[1459]: time="2025-05-17T00:21:59.592546173Z" level=info msg="CreateContainer within sandbox \"f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:21:59.602161 systemd[1]: sshd@7-64.23.130.50:22-139.178.68.195:58808.service: Deactivated successfully. May 17 00:21:59.607193 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:21:59.610673 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. May 17 00:21:59.628558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785391570.mount: Deactivated successfully. May 17 00:21:59.632487 containerd[1459]: time="2025-05-17T00:21:59.631726936Z" level=info msg="CreateContainer within sandbox \"f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c79153e6e04d657b55ec5113c52ccae0c8250d329ba668c72f2d5538427091f\"" May 17 00:21:59.632340 systemd-logind[1445]: Removed session 8. May 17 00:21:59.636708 containerd[1459]: time="2025-05-17T00:21:59.635172967Z" level=info msg="StartContainer for \"5c79153e6e04d657b55ec5113c52ccae0c8250d329ba668c72f2d5538427091f\"" May 17 00:21:59.802815 systemd[1]: Started cri-containerd-5c79153e6e04d657b55ec5113c52ccae0c8250d329ba668c72f2d5538427091f.scope - libcontainer container 5c79153e6e04d657b55ec5113c52ccae0c8250d329ba668c72f2d5538427091f. 
May 17 00:21:59.885199 containerd[1459]: time="2025-05-17T00:21:59.885155923Z" level=info msg="StartContainer for \"5c79153e6e04d657b55ec5113c52ccae0c8250d329ba668c72f2d5538427091f\" returns successfully" May 17 00:22:00.078283 containerd[1459]: time="2025-05-17T00:22:00.078200577Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:00.081948 containerd[1459]: time="2025-05-17T00:22:00.081854578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:22:00.087490 containerd[1459]: time="2025-05-17T00:22:00.087395006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 500.143468ms" May 17 00:22:00.087490 containerd[1459]: time="2025-05-17T00:22:00.087478886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:22:00.090228 containerd[1459]: time="2025-05-17T00:22:00.090174335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:22:00.103950 containerd[1459]: time="2025-05-17T00:22:00.103841902Z" level=info msg="CreateContainer within sandbox \"64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:00.175544 containerd[1459]: time="2025-05-17T00:22:00.175117416Z" level=info msg="CreateContainer within sandbox \"64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a266189c9f340500eb236ba26c064e8e06e4a20c1a627d5d885aacc606b3914\"" May 17 00:22:00.180180 containerd[1459]: time="2025-05-17T00:22:00.179164018Z" level=info msg="StartContainer for \"4a266189c9f340500eb236ba26c064e8e06e4a20c1a627d5d885aacc606b3914\"" May 17 00:22:00.250009 systemd[1]: Started cri-containerd-4a266189c9f340500eb236ba26c064e8e06e4a20c1a627d5d885aacc606b3914.scope - libcontainer container 4a266189c9f340500eb236ba26c064e8e06e4a20c1a627d5d885aacc606b3914. 
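The second calico-apiserver pull of the very same reference completes in ~500ms with only 77 bytes read: the first pull already populated containerd's content store, so only the manifest had to be re-fetched. Comparing the two logged durations:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Durations logged for the cold and warm pulls of the same image.
        cold, _ := time.ParseDuration("3.397163085s")
        warm, _ := time.ParseDuration("500.143468ms")
        fmt.Printf("warm pull ~%.1fx faster than cold\n", cold.Seconds()/warm.Seconds())
    }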
May 17 00:22:00.387438 containerd[1459]: time="2025-05-17T00:22:00.387364840Z" level=info msg="StartContainer for \"4a266189c9f340500eb236ba26c064e8e06e4a20c1a627d5d885aacc606b3914\" returns successfully" May 17 00:22:00.679639 kubelet[2458]: I0517 00:22:00.677972 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5759bc5bd9-rnppq" podStartSLOduration=31.978206126 podStartE2EDuration="38.677923877s" podCreationTimestamp="2025-05-17 00:21:22 +0000 UTC" firstStartedPulling="2025-05-17 00:21:53.390123303 +0000 UTC m=+46.622406080" lastFinishedPulling="2025-05-17 00:22:00.089841052 +0000 UTC m=+53.322123831" observedRunningTime="2025-05-17 00:22:00.666601242 +0000 UTC m=+53.898884231" watchObservedRunningTime="2025-05-17 00:22:00.677923877 +0000 UTC m=+53.910206663" May 17 00:22:01.527896 kubelet[2458]: I0517 00:22:01.527748 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:01.652884 kubelet[2458]: I0517 00:22:01.652819 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:01.653728 kubelet[2458]: I0517 00:22:01.653692 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:01.717120 systemd[1]: run-containerd-runc-k8s.io-fda05621fdb0f9b2cfb7b969f9533fff4bea2c9322a12e46d93367c976c5a9ff-runc.3pgmcS.mount: Deactivated successfully. May 17 00:22:01.977014 kubelet[2458]: I0517 00:22:01.976452 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5759bc5bd9-g9kg6" podStartSLOduration=33.719648627 podStartE2EDuration="39.976417585s" podCreationTimestamp="2025-05-17 00:21:22 +0000 UTC" firstStartedPulling="2025-05-17 00:21:53.326182113 +0000 UTC m=+46.558464889" lastFinishedPulling="2025-05-17 00:21:59.582951065 +0000 UTC m=+52.815233847" observedRunningTime="2025-05-17 00:22:00.737272671 +0000 UTC m=+53.969555459" watchObservedRunningTime="2025-05-17 00:22:01.976417585 +0000 UTC m=+55.208700374" May 17 00:22:02.636167 containerd[1459]: time="2025-05-17T00:22:02.635877474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:02.639224 containerd[1459]: time="2025-05-17T00:22:02.638728125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:22:02.640803 containerd[1459]: time="2025-05-17T00:22:02.640301357Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:02.646544 containerd[1459]: time="2025-05-17T00:22:02.646446644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:02.651047 containerd[1459]: time="2025-05-17T00:22:02.649966938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.559733947s" May 17 
00:22:02.651047 containerd[1459]: time="2025-05-17T00:22:02.650030466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:22:02.731100 containerd[1459]: time="2025-05-17T00:22:02.730399678Z" level=info msg="CreateContainer within sandbox \"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:22:02.763334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969509304.mount: Deactivated successfully. May 17 00:22:02.860800 containerd[1459]: time="2025-05-17T00:22:02.860727004Z" level=info msg="CreateContainer within sandbox \"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6f62225148326af50976fa6875bfd838759ba14d7e21eafa681a95d1640d8d81\"" May 17 00:22:02.864418 containerd[1459]: time="2025-05-17T00:22:02.863062517Z" level=info msg="StartContainer for \"6f62225148326af50976fa6875bfd838759ba14d7e21eafa681a95d1640d8d81\"" May 17 00:22:02.982220 containerd[1459]: time="2025-05-17T00:22:02.982046938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:02.994298 systemd[1]: Started cri-containerd-6f62225148326af50976fa6875bfd838759ba14d7e21eafa681a95d1640d8d81.scope - libcontainer container 6f62225148326af50976fa6875bfd838759ba14d7e21eafa681a95d1640d8d81. May 17 00:22:03.094941 containerd[1459]: time="2025-05-17T00:22:03.094750954Z" level=info msg="StartContainer for \"6f62225148326af50976fa6875bfd838759ba14d7e21eafa681a95d1640d8d81\" returns successfully" May 17 00:22:03.330683 containerd[1459]: time="2025-05-17T00:22:03.330475705Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:03.336977 containerd[1459]: time="2025-05-17T00:22:03.334220921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:03.336977 containerd[1459]: time="2025-05-17T00:22:03.336449842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:03.338569 kubelet[2458]: E0517 00:22:03.338493 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:03.339487 kubelet[2458]: E0517 00:22:03.338588 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:03.347690 kubelet[2458]: E0517 00:22:03.347603 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7f28aac9d54108bed1a27017a72b7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pt66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-599954c495-65fz2_calico-system(e57b1bf0-91c0-4dac-b141-813710ca490d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:03.353951 containerd[1459]: time="2025-05-17T00:22:03.352912116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:03.624344 containerd[1459]: time="2025-05-17T00:22:03.623424948Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:03.625478 containerd[1459]: time="2025-05-17T00:22:03.625407313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:03.626384 containerd[1459]: time="2025-05-17T00:22:03.626161692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:03.627673 kubelet[2458]: E0517 00:22:03.626749 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:03.627673 kubelet[2458]: E0517 00:22:03.626818 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:03.627673 kubelet[2458]: E0517 00:22:03.626997 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pt66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-599954c495-65fz2_calico-system(e57b1bf0-91c0-4dac-b141-813710ca490d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:03.635832 kubelet[2458]: E0517 00:22:03.635589 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:22:03.692374 kubelet[2458]: I0517 00:22:03.690514 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-789m7" podStartSLOduration=25.640341845000002 podStartE2EDuration="37.690485352s" podCreationTimestamp="2025-05-17 00:21:26 +0000 UTC" firstStartedPulling="2025-05-17 00:21:50.674273864 +0000 UTC m=+43.906556640" lastFinishedPulling="2025-05-17 00:22:02.724417357 +0000 UTC m=+55.956700147" observedRunningTime="2025-05-17 00:22:03.687464158 +0000 UTC m=+56.919746945" watchObservedRunningTime="2025-05-17 00:22:03.690485352 +0000 UTC m=+56.922768141" May 17 00:22:04.610373 kubelet[2458]: I0517 00:22:04.610113 2458 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:22:04.620382 kubelet[2458]: I0517 00:22:04.620308 2458 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:22:04.632975 systemd[1]: Started sshd@8-64.23.130.50:22-139.178.68.195:51708.service - OpenSSH per-connection server daemon (139.178.68.195:51708). May 17 00:22:04.910660 sshd[5089]: Accepted publickey for core from 139.178.68.195 port 51708 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:04.921107 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:04.933578 systemd-logind[1445]: New session 9 of user core. May 17 00:22:04.939005 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:22:05.547064 sshd[5089]: pam_unix(sshd:session): session closed for user core May 17 00:22:05.553696 systemd[1]: sshd@8-64.23.130.50:22-139.178.68.195:51708.service: Deactivated successfully. May 17 00:22:05.558119 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:22:05.559952 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. May 17 00:22:05.561739 systemd-logind[1445]: Removed session 9. 
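The pod_startup_latency_tracker entries decompose startup time: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the csi-node-driver-789m7 numbers from the entry above (the last digits differ slightly from the logged value because the tracker rounds through float seconds):

    package main

    import (
        "fmt"
        "time"
    )

    // ts parses the Go default time format these kubelet fields use.
    func ts(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied from the csi-node-driver-789m7 entry above.
        created := ts("2025-05-17 00:21:26 +0000 UTC")
        running := ts("2025-05-17 00:22:03.690485352 +0000 UTC")
        firstPull := ts("2025-05-17 00:21:50.674273864 +0000 UTC")
        lastPull := ts("2025-05-17 00:22:02.724417357 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window
        fmt.Println("e2e:", e2e, "slo:", slo)
    }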
May 17 00:22:07.105100 containerd[1459]: time="2025-05-17T00:22:07.104783236Z" level=info msg="StopPodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\"" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.446 [WARNING][5118] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8567c791-2cba-41d1-9dcc-a920a450b5ec", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b", Pod:"coredns-674b8bbfcf-k8sjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali686c174d889", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.449 [INFO][5118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.449 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" iface="eth0" netns="" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.449 [INFO][5118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.449 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.650 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.654 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.656 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.674 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.674 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.677 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:07.684172 containerd[1459]: 2025-05-17 00:22:07.680 [INFO][5118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:07.691281 containerd[1459]: time="2025-05-17T00:22:07.691051932Z" level=info msg="TearDown network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" successfully" May 17 00:22:07.691281 containerd[1459]: time="2025-05-17T00:22:07.691124252Z" level=info msg="StopPodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" returns successfully" May 17 00:22:07.838480 containerd[1459]: time="2025-05-17T00:22:07.838171653Z" level=info msg="RemovePodSandbox for \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\"" May 17 00:22:07.844342 containerd[1459]: time="2025-05-17T00:22:07.844252856Z" level=info msg="Forcibly stopping sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\"" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.946 [WARNING][5139] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8567c791-2cba-41d1-9dcc-a920a450b5ec", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"da0c69397c2007848f3404da3bfa6a18ab078b1b27f511ab7859b7281bf5d89b", Pod:"coredns-674b8bbfcf-k8sjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali686c174d889", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.947 [INFO][5139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.947 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" iface="eth0" netns="" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.947 [INFO][5139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.947 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.998 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.999 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:07.999 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:08.009 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:08.009 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" HandleID="k8s-pod-network.f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--k8sjh-eth0" May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:08.017 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.035996 containerd[1459]: 2025-05-17 00:22:08.029 [INFO][5139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f" May 17 00:22:08.035996 containerd[1459]: time="2025-05-17T00:22:08.033708914Z" level=info msg="TearDown network for sandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" successfully" May 17 00:22:08.045609 containerd[1459]: time="2025-05-17T00:22:08.045137007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:08.060865 containerd[1459]: time="2025-05-17T00:22:08.060772939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:08.083149 containerd[1459]: time="2025-05-17T00:22:08.083070611Z" level=info msg="RemovePodSandbox \"f3ef3683d95a1f95c579218f071135c32265faf815f5f6fa0dc2a04150c8011f\" returns successfully" May 17 00:22:08.098485 containerd[1459]: time="2025-05-17T00:22:08.098428430Z" level=info msg="StopPodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\"" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.185 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d63156b5-315e-4424-8e51-f55b1ec001db", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322", Pod:"coredns-674b8bbfcf-n9mg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdbdd55df4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.185 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.185 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" iface="eth0" netns="" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.185 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.185 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.224 [INFO][5171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.224 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.224 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.233 [WARNING][5171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.233 [INFO][5171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.235 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.242077 containerd[1459]: 2025-05-17 00:22:08.239 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.244203 containerd[1459]: time="2025-05-17T00:22:08.242126855Z" level=info msg="TearDown network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" successfully" May 17 00:22:08.244203 containerd[1459]: time="2025-05-17T00:22:08.242153532Z" level=info msg="StopPodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" returns successfully" May 17 00:22:08.244315 containerd[1459]: time="2025-05-17T00:22:08.244278085Z" level=info msg="RemovePodSandbox for \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\"" May 17 00:22:08.244358 containerd[1459]: time="2025-05-17T00:22:08.244329978Z" level=info msg="Forcibly stopping sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\"" May 17 00:22:08.283114 containerd[1459]: time="2025-05-17T00:22:08.282362420Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:08.284760 containerd[1459]: time="2025-05-17T00:22:08.284227905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:08.284760 containerd[1459]: time="2025-05-17T00:22:08.284341783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:08.287679 kubelet[2458]: E0517 00:22:08.287430 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:08.294623 kubelet[2458]: E0517 
00:22:08.294560 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:08.306536 kubelet[2458]: E0517 00:22:08.305896 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc6t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-l9z28_calico-system(c8e662ac-310d-4631-a4cc-a86f8c336b26): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:08.307605 kubelet[2458]: E0517 00:22:08.307507 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.323 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d63156b5-315e-4424-8e51-f55b1ec001db", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"8a14ed73f7724612e0160b5c7ee094fa9f54b39a843344ba225ecb0f74f1e322", Pod:"coredns-674b8bbfcf-n9mg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdbdd55df4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.325 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.325 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" iface="eth0" netns="" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.325 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.325 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.376 [INFO][5193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.376 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.376 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.392 [WARNING][5193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.392 [INFO][5193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" HandleID="k8s-pod-network.14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" Workload="ci--4081.3.3--n--6deca81674-k8s-coredns--674b8bbfcf--n9mg8-eth0" May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.395 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.409195 containerd[1459]: 2025-05-17 00:22:08.404 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357" May 17 00:22:08.410033 containerd[1459]: time="2025-05-17T00:22:08.409262714Z" level=info msg="TearDown network for sandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" successfully" May 17 00:22:08.414094 containerd[1459]: time="2025-05-17T00:22:08.414025933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:22:08.414607 containerd[1459]: time="2025-05-17T00:22:08.414146143Z" level=info msg="RemovePodSandbox \"14e35921940cdbcb723e7115598dfaa47deef4ff97a87f924bd85b468fb1d357\" returns successfully" May 17 00:22:08.421641 containerd[1459]: time="2025-05-17T00:22:08.420898740Z" level=info msg="StopPodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\"" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.481 [WARNING][5207] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.482 [INFO][5207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.482 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" iface="eth0" netns="" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.482 [INFO][5207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.482 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.526 [INFO][5214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.527 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.527 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.537 [WARNING][5214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.538 [INFO][5214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.546 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.555274 containerd[1459]: 2025-05-17 00:22:08.550 [INFO][5207] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.557071 containerd[1459]: time="2025-05-17T00:22:08.555238188Z" level=info msg="TearDown network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" successfully" May 17 00:22:08.557490 containerd[1459]: time="2025-05-17T00:22:08.557078233Z" level=info msg="StopPodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" returns successfully" May 17 00:22:08.564237 containerd[1459]: time="2025-05-17T00:22:08.564181513Z" level=info msg="RemovePodSandbox for \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\"" May 17 00:22:08.564237 containerd[1459]: time="2025-05-17T00:22:08.564242790Z" level=info msg="Forcibly stopping sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\"" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.627 [WARNING][5228] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" WorkloadEndpoint="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.627 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.627 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" iface="eth0" netns="" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.627 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.627 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.664 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.664 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.664 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.674 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.674 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" HandleID="k8s-pod-network.64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" Workload="ci--4081.3.3--n--6deca81674-k8s-whisker--64f4887fbc--jcxt9-eth0" May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.677 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.707862 containerd[1459]: 2025-05-17 00:22:08.679 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735" May 17 00:22:08.708748 containerd[1459]: time="2025-05-17T00:22:08.708032866Z" level=info msg="TearDown network for sandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" successfully" May 17 00:22:08.712336 containerd[1459]: time="2025-05-17T00:22:08.712278723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:08.712619 containerd[1459]: time="2025-05-17T00:22:08.712362425Z" level=info msg="RemovePodSandbox \"64573f90c9a4988d5ae13358b9692acaabd8ad94f0826300c2e60f7472036735\" returns successfully" May 17 00:22:08.713488 containerd[1459]: time="2025-05-17T00:22:08.713029412Z" level=info msg="StopPodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\"" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.777 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"877588de-b588-49a3-a0b6-58a44269c024", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c", Pod:"calico-apiserver-5759bc5bd9-g9kg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib058d04a49b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.778 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.778 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" iface="eth0" netns="" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.778 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.778 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.810 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.810 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.810 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.820 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.820 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.823 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.829581 containerd[1459]: 2025-05-17 00:22:08.826 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.830969 containerd[1459]: time="2025-05-17T00:22:08.830126324Z" level=info msg="TearDown network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" successfully" May 17 00:22:08.830969 containerd[1459]: time="2025-05-17T00:22:08.830163841Z" level=info msg="StopPodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" returns successfully" May 17 00:22:08.832155 containerd[1459]: time="2025-05-17T00:22:08.831776688Z" level=info msg="RemovePodSandbox for \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\"" May 17 00:22:08.832155 containerd[1459]: time="2025-05-17T00:22:08.831836729Z" level=info msg="Forcibly stopping sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\"" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.906 [WARNING][5270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"877588de-b588-49a3-a0b6-58a44269c024", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"f4b3dd5bb957e6d993e71a515370951c5504f6c483af5f4b3941a0dba7dd2e1c", Pod:"calico-apiserver-5759bc5bd9-g9kg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib058d04a49b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.907 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.907 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" iface="eth0" netns="" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.907 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.907 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.957 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.957 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.958 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.969 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.969 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" HandleID="k8s-pod-network.ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--g9kg6-eth0" May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.973 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.981642 containerd[1459]: 2025-05-17 00:22:08.978 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0" May 17 00:22:08.982978 containerd[1459]: time="2025-05-17T00:22:08.981694325Z" level=info msg="TearDown network for sandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" successfully" May 17 00:22:08.984741 containerd[1459]: time="2025-05-17T00:22:08.984677078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:08.984899 containerd[1459]: time="2025-05-17T00:22:08.984772677Z" level=info msg="RemovePodSandbox \"ca08b4c35373f297f29a005b6f4716054098a2d25a214381db9c15cb57b2faa0\" returns successfully" May 17 00:22:08.985464 containerd[1459]: time="2025-05-17T00:22:08.985439579Z" level=info msg="StopPodSandbox for \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\"" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.053 [WARNING][5298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"c8e662ac-310d-4631-a4cc-a86f8c336b26", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e", Pod:"goldmane-78d55f7ddc-l9z28", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6df082cce77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.054 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.054 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" iface="eth0" netns="" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.054 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.054 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.099 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.099 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.099 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.115 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.116 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.121 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.130979 containerd[1459]: 2025-05-17 00:22:09.125 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.130979 containerd[1459]: time="2025-05-17T00:22:09.130765086Z" level=info msg="TearDown network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" successfully" May 17 00:22:09.130979 containerd[1459]: time="2025-05-17T00:22:09.130789725Z" level=info msg="StopPodSandbox for \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" returns successfully" May 17 00:22:09.134090 containerd[1459]: time="2025-05-17T00:22:09.133175216Z" level=info msg="RemovePodSandbox for \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\"" May 17 00:22:09.134090 containerd[1459]: time="2025-05-17T00:22:09.133233609Z" level=info msg="Forcibly stopping sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\"" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.250 [WARNING][5319] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"c8e662ac-310d-4631-a4cc-a86f8c336b26", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"fd5356e11a6f4303463e9f8486050632689a77767953085fd8fb469d4f63624e", Pod:"goldmane-78d55f7ddc-l9z28", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6df082cce77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.250 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.250 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" iface="eth0" netns="" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.250 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.250 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.299 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.299 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.299 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.311 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.311 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" HandleID="k8s-pod-network.3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" Workload="ci--4081.3.3--n--6deca81674-k8s-goldmane--78d55f7ddc--l9z28-eth0" May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.313 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.320045 containerd[1459]: 2025-05-17 00:22:09.316 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed" May 17 00:22:09.321894 containerd[1459]: time="2025-05-17T00:22:09.320265580Z" level=info msg="TearDown network for sandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" successfully" May 17 00:22:09.323389 containerd[1459]: time="2025-05-17T00:22:09.323337269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:09.323549 containerd[1459]: time="2025-05-17T00:22:09.323427508Z" level=info msg="RemovePodSandbox \"3bdf3a0c5969abb17f4b6030e9f6145018c25e905154653f601ef0302f17b5ed\" returns successfully" May 17 00:22:09.331067 containerd[1459]: time="2025-05-17T00:22:09.330996552Z" level=info msg="StopPodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\"" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.384 [WARNING][5340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0", GenerateName:"calico-kube-controllers-7f8bd85b9-", Namespace:"calico-system", SelfLink:"", UID:"535f48a8-c7ba-4110-afb4-e41a21f02377", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8bd85b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f", Pod:"calico-kube-controllers-7f8bd85b9-zb9lw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib88c5cffaaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.384 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.384 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" iface="eth0" netns="" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.384 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.384 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.416 [INFO][5347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.416 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.416 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.426 [WARNING][5347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.426 [INFO][5347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.429 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.435511 containerd[1459]: 2025-05-17 00:22:09.432 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.437933 containerd[1459]: time="2025-05-17T00:22:09.435584146Z" level=info msg="TearDown network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" successfully" May 17 00:22:09.437933 containerd[1459]: time="2025-05-17T00:22:09.435612139Z" level=info msg="StopPodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" returns successfully" May 17 00:22:09.437933 containerd[1459]: time="2025-05-17T00:22:09.436834165Z" level=info msg="RemovePodSandbox for \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\"" May 17 00:22:09.437933 containerd[1459]: time="2025-05-17T00:22:09.436890698Z" level=info msg="Forcibly stopping sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\"" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.489 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0", GenerateName:"calico-kube-controllers-7f8bd85b9-", Namespace:"calico-system", SelfLink:"", UID:"535f48a8-c7ba-4110-afb4-e41a21f02377", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8bd85b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"f18b50015cecb889dffa1849b2bef8a810342da4cea4a735ffb781a48978a22f", Pod:"calico-kube-controllers-7f8bd85b9-zb9lw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib88c5cffaaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.490 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.490 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" iface="eth0" netns="" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.490 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.490 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.524 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.524 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.524 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.533 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.533 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" HandleID="k8s-pod-network.95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--kube--controllers--7f8bd85b9--zb9lw-eth0" May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.536 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.542761 containerd[1459]: 2025-05-17 00:22:09.539 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426" May 17 00:22:09.544934 containerd[1459]: time="2025-05-17T00:22:09.543282211Z" level=info msg="TearDown network for sandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" successfully" May 17 00:22:09.547666 containerd[1459]: time="2025-05-17T00:22:09.547270160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:09.547666 containerd[1459]: time="2025-05-17T00:22:09.547424003Z" level=info msg="RemovePodSandbox \"95e1f9f52bc377d7585201a4fbbbe2e3b3a191fef5de96b63375abd7fc1be426\" returns successfully" May 17 00:22:09.548390 containerd[1459]: time="2025-05-17T00:22:09.548341345Z" level=info msg="StopPodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\"" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.600 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a01dd2-ca4a-48d8-b676-505f65a04723", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d", Pod:"calico-apiserver-5759bc5bd9-rnppq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic68c10de7a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.601 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.601 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" iface="eth0" netns="" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.601 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.601 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.648 [INFO][5389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.648 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.648 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.656 [WARNING][5389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.656 [INFO][5389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.659 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.665724 containerd[1459]: 2025-05-17 00:22:09.662 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.665724 containerd[1459]: time="2025-05-17T00:22:09.665510549Z" level=info msg="TearDown network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" successfully" May 17 00:22:09.665724 containerd[1459]: time="2025-05-17T00:22:09.665591211Z" level=info msg="StopPodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" returns successfully" May 17 00:22:09.668438 containerd[1459]: time="2025-05-17T00:22:09.666996254Z" level=info msg="RemovePodSandbox for \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\"" May 17 00:22:09.668438 containerd[1459]: time="2025-05-17T00:22:09.667034475Z" level=info msg="Forcibly stopping sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\"" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.718 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0", GenerateName:"calico-apiserver-5759bc5bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a01dd2-ca4a-48d8-b676-505f65a04723", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5759bc5bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"64f0f9d8c4f0a1f42be9888aedce1f9e918abb96adf8afe11c2defbd296e8d1d", Pod:"calico-apiserver-5759bc5bd9-rnppq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic68c10de7a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.719 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.719 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" iface="eth0" netns="" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.719 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.719 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.756 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.756 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.756 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.765 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.765 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" HandleID="k8s-pod-network.68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" Workload="ci--4081.3.3--n--6deca81674-k8s-calico--apiserver--5759bc5bd9--rnppq-eth0" May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.768 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.775051 containerd[1459]: 2025-05-17 00:22:09.772 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62" May 17 00:22:09.776626 containerd[1459]: time="2025-05-17T00:22:09.775002797Z" level=info msg="TearDown network for sandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" successfully" May 17 00:22:09.782480 containerd[1459]: time="2025-05-17T00:22:09.782242726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:09.782480 containerd[1459]: time="2025-05-17T00:22:09.782334772Z" level=info msg="RemovePodSandbox \"68bfc848cdc8be41830aa11a05f1538ed882acafdd7acd5a31648f3d45456a62\" returns successfully" May 17 00:22:09.783152 containerd[1459]: time="2025-05-17T00:22:09.783101824Z" level=info msg="StopPodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\"" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.834 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a", ResourceVersion:"1187", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c", Pod:"csi-node-driver-789m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7dca463b334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.835 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.835 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" iface="eth0" netns="" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.835 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.835 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.870 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.870 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.870 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.880 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.880 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.882 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.888737 containerd[1459]: 2025-05-17 00:22:09.886 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:09.889745 containerd[1459]: time="2025-05-17T00:22:09.888792998Z" level=info msg="TearDown network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" successfully" May 17 00:22:09.889745 containerd[1459]: time="2025-05-17T00:22:09.888838248Z" level=info msg="StopPodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" returns successfully" May 17 00:22:09.889745 containerd[1459]: time="2025-05-17T00:22:09.889413778Z" level=info msg="RemovePodSandbox for \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\"" May 17 00:22:09.889745 containerd[1459]: time="2025-05-17T00:22:09.889445847Z" level=info msg="Forcibly stopping sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\"" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.945 [WARNING][5445] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3741cab5-ed2e-41cc-bf79-c5ceb8c1246a", ResourceVersion:"1187", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-6deca81674", ContainerID:"15ea6d11fcb8e73fa18ed7510bbdb40a007f4f766602970523d2a326945d163c", Pod:"csi-node-driver-789m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7dca463b334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.945 [INFO][5445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.945 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" iface="eth0" netns="" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.945 [INFO][5445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.945 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.982 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.982 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.982 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.992 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.992 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" HandleID="k8s-pod-network.2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" Workload="ci--4081.3.3--n--6deca81674-k8s-csi--node--driver--789m7-eth0" May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:09.998 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:10.007283 containerd[1459]: 2025-05-17 00:22:10.003 [INFO][5445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31" May 17 00:22:10.007283 containerd[1459]: time="2025-05-17T00:22:10.007409603Z" level=info msg="TearDown network for sandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" successfully" May 17 00:22:10.011944 containerd[1459]: time="2025-05-17T00:22:10.011818207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:10.012506 containerd[1459]: time="2025-05-17T00:22:10.012221644Z" level=info msg="RemovePodSandbox \"2f676eb503db4df4ef6f7efa58affa80f23dd9fd71139a9fd6ace2d07714fc31\" returns successfully" May 17 00:22:10.571066 systemd[1]: Started sshd@9-64.23.130.50:22-139.178.68.195:51714.service - OpenSSH per-connection server daemon (139.178.68.195:51714). May 17 00:22:10.729416 sshd[5459]: Accepted publickey for core from 139.178.68.195 port 51714 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:10.734749 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:10.744743 systemd-logind[1445]: New session 10 of user core. May 17 00:22:10.757964 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:22:11.105701 sshd[5459]: pam_unix(sshd:session): session closed for user core May 17 00:22:11.119921 systemd[1]: sshd@9-64.23.130.50:22-139.178.68.195:51714.service: Deactivated successfully. May 17 00:22:11.123624 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:22:11.125888 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. May 17 00:22:11.136295 systemd[1]: Started sshd@10-64.23.130.50:22-139.178.68.195:51728.service - OpenSSH per-connection server daemon (139.178.68.195:51728). May 17 00:22:11.138342 systemd-logind[1445]: Removed session 10. May 17 00:22:11.196927 sshd[5473]: Accepted publickey for core from 139.178.68.195 port 51728 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:11.199704 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:11.207498 systemd-logind[1445]: New session 11 of user core. May 17 00:22:11.212207 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:22:11.462905 sshd[5473]: pam_unix(sshd:session): session closed for user core May 17 00:22:11.476373 systemd[1]: sshd@10-64.23.130.50:22-139.178.68.195:51728.service: Deactivated successfully. 
May 17 00:22:11.479153 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:22:11.481406 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. May 17 00:22:11.492005 systemd[1]: Started sshd@11-64.23.130.50:22-139.178.68.195:51742.service - OpenSSH per-connection server daemon (139.178.68.195:51742). May 17 00:22:11.496304 systemd-logind[1445]: Removed session 11. May 17 00:22:11.557145 sshd[5484]: Accepted publickey for core from 139.178.68.195 port 51742 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:11.559670 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:11.567566 systemd-logind[1445]: New session 12 of user core. May 17 00:22:11.577912 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:22:11.761002 sshd[5484]: pam_unix(sshd:session): session closed for user core May 17 00:22:11.766093 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. May 17 00:22:11.766366 systemd[1]: sshd@11-64.23.130.50:22-139.178.68.195:51742.service: Deactivated successfully. May 17 00:22:11.769409 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:22:11.773170 systemd-logind[1445]: Removed session 12. May 17 00:22:14.336578 kubelet[2458]: I0517 00:22:14.335760 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:15.983612 kubelet[2458]: E0517 00:22:15.982982 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:22:16.079205 kubelet[2458]: I0517 00:22:16.079152 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:16.782923 systemd[1]: Started sshd@12-64.23.130.50:22-139.178.68.195:45758.service - OpenSSH per-connection server daemon (139.178.68.195:45758). May 17 00:22:16.849873 sshd[5511]: Accepted publickey for core from 139.178.68.195 port 45758 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:16.852595 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:16.858848 systemd-logind[1445]: New session 13 of user core. May 17 00:22:16.864890 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 17 00:22:17.204720 sshd[5511]: pam_unix(sshd:session): session closed for user core May 17 00:22:17.212328 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. May 17 00:22:17.213411 systemd[1]: sshd@12-64.23.130.50:22-139.178.68.195:45758.service: Deactivated successfully. May 17 00:22:17.217178 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:22:17.220678 systemd-logind[1445]: Removed session 13. May 17 00:22:18.010899 kubelet[2458]: E0517 00:22:18.010841 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:22.227604 systemd[1]: Started sshd@13-64.23.130.50:22-139.178.68.195:45764.service - OpenSSH per-connection server daemon (139.178.68.195:45764). May 17 00:22:22.399335 sshd[5548]: Accepted publickey for core from 139.178.68.195 port 45764 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:22.404347 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:22.412997 systemd-logind[1445]: New session 14 of user core. May 17 00:22:22.421332 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:22:23.020835 kubelet[2458]: E0517 00:22:23.016988 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:22:23.249225 sshd[5548]: pam_unix(sshd:session): session closed for user core May 17 00:22:23.258014 systemd[1]: sshd@13-64.23.130.50:22-139.178.68.195:45764.service: Deactivated successfully. May 17 00:22:23.262076 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:22:23.267192 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. May 17 00:22:23.269801 systemd-logind[1445]: Removed session 14. May 17 00:22:23.979071 kubelet[2458]: E0517 00:22:23.978999 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:27.974793 containerd[1459]: time="2025-05-17T00:22:27.974385806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:28.265696 systemd[1]: Started sshd@14-64.23.130.50:22-139.178.68.195:54594.service - OpenSSH per-connection server daemon (139.178.68.195:54594). 
May 17 00:22:28.275997 containerd[1459]: time="2025-05-17T00:22:28.275710136Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:28.281558 containerd[1459]: time="2025-05-17T00:22:28.279730523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:28.281558 containerd[1459]: time="2025-05-17T00:22:28.279855761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:28.285144 kubelet[2458]: E0517 00:22:28.284829 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:28.285144 kubelet[2458]: E0517 00:22:28.284973 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:28.289987 kubelet[2458]: E0517 00:22:28.287936 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7f28aac9d54108bed1a27017a72b7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pt66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-599954c495-65fz2_calico-system(e57b1bf0-91c0-4dac-b141-813710ca490d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:28.294120 containerd[1459]: time="2025-05-17T00:22:28.293801084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:28.394615 sshd[5561]: Accepted publickey for core from 139.178.68.195 port 54594 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:28.400725 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:28.420449 systemd-logind[1445]: New session 15 of user core. May 17 00:22:28.427851 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 00:22:28.545203 containerd[1459]: time="2025-05-17T00:22:28.544996857Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:28.546927 containerd[1459]: time="2025-05-17T00:22:28.546792726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:28.548708 containerd[1459]: time="2025-05-17T00:22:28.546825008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:28.549880 kubelet[2458]: E0517 00:22:28.549597 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:28.549880 kubelet[2458]: E0517 00:22:28.549700 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:28.553915 kubelet[2458]: E0517 00:22:28.553753 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pt66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-599954c495-65fz2_calico-system(e57b1bf0-91c0-4dac-b141-813710ca490d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:28.555705 kubelet[2458]: E0517 00:22:28.555090 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:22:29.014081 kubelet[2458]: E0517 00:22:29.008319 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:29.391809 sshd[5561]: pam_unix(sshd:session): session closed for user core May 17 00:22:29.405324 systemd[1]: sshd@14-64.23.130.50:22-139.178.68.195:54594.service: Deactivated successfully. May 17 00:22:29.410208 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:22:29.413179 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. May 17 00:22:29.419760 systemd-logind[1445]: Removed session 15. May 17 00:22:34.412061 systemd[1]: Started sshd@15-64.23.130.50:22-139.178.68.195:41478.service - OpenSSH per-connection server daemon (139.178.68.195:41478). May 17 00:22:34.554576 sshd[5603]: Accepted publickey for core from 139.178.68.195 port 41478 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:34.557574 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:34.566942 systemd-logind[1445]: New session 16 of user core. May 17 00:22:34.574990 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:22:34.949293 sshd[5603]: pam_unix(sshd:session): session closed for user core May 17 00:22:34.960636 systemd[1]: sshd@15-64.23.130.50:22-139.178.68.195:41478.service: Deactivated successfully. May 17 00:22:34.965360 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:22:34.968359 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. May 17 00:22:34.979571 systemd[1]: Started sshd@16-64.23.130.50:22-139.178.68.195:41490.service - OpenSSH per-connection server daemon (139.178.68.195:41490). May 17 00:22:34.982683 systemd-logind[1445]: Removed session 16. May 17 00:22:35.046410 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 41490 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:35.050394 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:35.057585 systemd-logind[1445]: New session 17 of user core. May 17 00:22:35.063838 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:22:35.449155 sshd[5615]: pam_unix(sshd:session): session closed for user core May 17 00:22:35.461705 systemd[1]: sshd@16-64.23.130.50:22-139.178.68.195:41490.service: Deactivated successfully. May 17 00:22:35.466143 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:22:35.468925 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. May 17 00:22:35.476005 systemd[1]: Started sshd@17-64.23.130.50:22-139.178.68.195:41494.service - OpenSSH per-connection server daemon (139.178.68.195:41494). May 17 00:22:35.478456 systemd-logind[1445]: Removed session 17. May 17 00:22:35.560017 sshd[5626]: Accepted publickey for core from 139.178.68.195 port 41494 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:35.562781 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:35.572309 systemd-logind[1445]: New session 18 of user core. May 17 00:22:35.576829 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:22:36.747254 sshd[5626]: pam_unix(sshd:session): session closed for user core May 17 00:22:36.762796 systemd[1]: sshd@17-64.23.130.50:22-139.178.68.195:41494.service: Deactivated successfully. May 17 00:22:36.767878 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:22:36.772424 systemd-logind[1445]: Session 18 logged out. 
Waiting for processes to exit. May 17 00:22:36.778992 systemd[1]: Started sshd@18-64.23.130.50:22-139.178.68.195:41510.service - OpenSSH per-connection server daemon (139.178.68.195:41510). May 17 00:22:36.794160 systemd-logind[1445]: Removed session 18. May 17 00:22:36.876028 sshd[5642]: Accepted publickey for core from 139.178.68.195 port 41510 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:36.880055 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:36.888429 systemd-logind[1445]: New session 19 of user core. May 17 00:22:36.895010 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:22:37.117749 containerd[1459]: time="2025-05-17T00:22:37.117465024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:37.362192 containerd[1459]: time="2025-05-17T00:22:37.361949779Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:37.363057 containerd[1459]: time="2025-05-17T00:22:37.362978437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:37.364548 containerd[1459]: time="2025-05-17T00:22:37.362955619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:37.394933 kubelet[2458]: E0517 00:22:37.372933 2458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:37.394933 kubelet[2458]: E0517 00:22:37.394668 2458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:37.399072 kubelet[2458]: E0517 00:22:37.398029 2458 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc6t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-l9z28_calico-system(c8e662ac-310d-4631-a4cc-a86f8c336b26): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:37.417715 kubelet[2458]: E0517 00:22:37.417507 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26" May 17 00:22:38.084678 sshd[5642]: pam_unix(sshd:session): session closed for user core May 17 00:22:38.109245 systemd[1]: Started sshd@19-64.23.130.50:22-139.178.68.195:41522.service - OpenSSH per-connection server daemon (139.178.68.195:41522). May 17 00:22:38.110099 systemd[1]: sshd@18-64.23.130.50:22-139.178.68.195:41510.service: Deactivated successfully. May 17 00:22:38.116144 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:22:38.118688 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. May 17 00:22:38.123710 systemd-logind[1445]: Removed session 19. May 17 00:22:38.252233 sshd[5653]: Accepted publickey for core from 139.178.68.195 port 41522 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:38.255799 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:38.263784 systemd-logind[1445]: New session 20 of user core. May 17 00:22:38.272034 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:22:38.469059 sshd[5653]: pam_unix(sshd:session): session closed for user core May 17 00:22:38.473962 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. May 17 00:22:38.477337 systemd[1]: sshd@19-64.23.130.50:22-139.178.68.195:41522.service: Deactivated successfully. May 17 00:22:38.481307 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:22:38.483121 systemd-logind[1445]: Removed session 20. May 17 00:22:38.982570 kubelet[2458]: E0517 00:22:38.980876 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d" May 17 00:22:39.972683 kubelet[2458]: E0517 00:22:39.972556 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:43.497168 systemd[1]: Started sshd@20-64.23.130.50:22-139.178.68.195:41528.service - OpenSSH per-connection server daemon (139.178.68.195:41528). 
May 17 00:22:43.639591 sshd[5691]: Accepted publickey for core from 139.178.68.195 port 41528 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ
May 17 00:22:43.642691 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:43.648726 systemd-logind[1445]: New session 21 of user core.
May 17 00:22:43.654878 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:22:44.388863 sshd[5691]: pam_unix(sshd:session): session closed for user core
May 17 00:22:44.394841 systemd[1]: sshd@20-64.23.130.50:22-139.178.68.195:41528.service: Deactivated successfully.
May 17 00:22:44.399642 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:22:44.401024 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
May 17 00:22:44.403723 systemd-logind[1445]: Removed session 21.
May 17 00:22:49.428677 systemd[1]: Started sshd@21-64.23.130.50:22-139.178.68.195:34886.service - OpenSSH per-connection server daemon (139.178.68.195:34886).
May 17 00:22:49.547606 sshd[5725]: Accepted publickey for core from 139.178.68.195 port 34886 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ
May 17 00:22:49.551574 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:49.562704 systemd-logind[1445]: New session 22 of user core.
May 17 00:22:49.568905 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:22:49.895245 sshd[5725]: pam_unix(sshd:session): session closed for user core
May 17 00:22:49.907115 systemd[1]: sshd@21-64.23.130.50:22-139.178.68.195:34886.service: Deactivated successfully.
May 17 00:22:49.913763 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:22:49.916543 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
May 17 00:22:49.921373 systemd-logind[1445]: Removed session 22.
May 17 00:22:49.982986 kubelet[2458]: E0517 00:22:49.982078 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-l9z28" podUID="c8e662ac-310d-4631-a4cc-a86f8c336b26"
May 17 00:22:52.978844 kubelet[2458]: E0517 00:22:52.978772 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-599954c495-65fz2" podUID="e57b1bf0-91c0-4dac-b141-813710ca490d"
May 17 00:22:54.919907 systemd[1]: Started sshd@22-64.23.130.50:22-139.178.68.195:41602.service - OpenSSH per-connection server daemon (139.178.68.195:41602).
May 17 00:22:55.048158 sshd[5738]: Accepted publickey for core from 139.178.68.195 port 41602 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ
May 17 00:22:55.053312 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:55.067449 systemd-logind[1445]: New session 23 of user core.
May 17 00:22:55.072046 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:22:55.504047 sshd[5738]: pam_unix(sshd:session): session closed for user core
May 17 00:22:55.512201 systemd[1]: sshd@22-64.23.130.50:22-139.178.68.195:41602.service: Deactivated successfully.
May 17 00:22:55.512926 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
May 17 00:22:55.521010 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:22:55.524424 systemd-logind[1445]: Removed session 23.
May 17 00:22:56.974312 kubelet[2458]: E0517 00:22:56.973847 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"