May 17 00:22:18.913497 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:22:18.913530 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:22:18.913549 kernel: BIOS-provided physical RAM map:
May 17 00:22:18.913558 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:22:18.913568 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:22:18.913577 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:22:18.913588 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 17 00:22:18.913598 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 17 00:22:18.913608 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:22:18.913622 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:22:18.913633 kernel: NX (Execute Disable) protection: active
May 17 00:22:18.913645 kernel: APIC: Static calls initialized
May 17 00:22:18.913662 kernel: SMBIOS 2.8 present.
May 17 00:22:18.913670 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 17 00:22:18.913679 kernel: Hypervisor detected: KVM
May 17 00:22:18.913690 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:22:18.913701 kernel: kvm-clock: using sched offset of 3082697521 cycles
May 17 00:22:18.913709 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:22:18.913717 kernel: tsc: Detected 2494.134 MHz processor
May 17 00:22:18.913725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:22:18.913733 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:22:18.913741 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 17 00:22:18.913749 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:22:18.913757 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:22:18.913768 kernel: ACPI: Early table checksum verification disabled
May 17 00:22:18.913776 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 17 00:22:18.913784 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913791 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913799 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913806 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:22:18.913814 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913822 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913829 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913840 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:22:18.913848 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 17 00:22:18.913856 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 17 00:22:18.913863 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:22:18.913871 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 17 00:22:18.913879 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 17 00:22:18.913887 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 17 00:22:18.913904 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 17 00:22:18.913918 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:22:18.913926 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:22:18.913935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:22:18.913943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 17 00:22:18.913954 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 17 00:22:18.913963 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 17 00:22:18.916133 kernel: Zone ranges:
May 17 00:22:18.916144 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:22:18.916153 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 17 00:22:18.916161 kernel: Normal empty
May 17 00:22:18.916170 kernel: Movable zone start for each node
May 17 00:22:18.916178 kernel: Early memory node ranges
May 17 00:22:18.916187 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:22:18.916195 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 17 00:22:18.916204 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 17 00:22:18.916215 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:22:18.916224 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:22:18.916239 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 17 00:22:18.916248 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:22:18.916256 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:22:18.916265 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:22:18.916273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:22:18.916281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:22:18.916290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:22:18.916301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:22:18.916309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:22:18.916318 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:22:18.916326 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:22:18.916335 kernel: TSC deadline timer available
May 17 00:22:18.916343 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:22:18.916351 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:22:18.916364 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 17 00:22:18.916382 kernel: Booting paravirtualized kernel on KVM
May 17 00:22:18.916396 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:22:18.916408 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:22:18.916416 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:22:18.916425 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:22:18.916433 kernel: pcpu-alloc: [0] 0 1
May 17 00:22:18.916441 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:22:18.916460 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:22:18.916469 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:22:18.916477 kernel: random: crng init done
May 17 00:22:18.916489 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:22:18.916498 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:22:18.916506 kernel: Fallback order for Node 0: 0
May 17 00:22:18.916515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 17 00:22:18.916523 kernel: Policy zone: DMA32
May 17 00:22:18.916536 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:22:18.916545 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 125148K reserved, 0K cma-reserved)
May 17 00:22:18.916554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:22:18.916566 kernel: Kernel/User page tables isolation: enabled
May 17 00:22:18.916577 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:22:18.916586 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:22:18.916595 kernel: Dynamic Preempt: voluntary
May 17 00:22:18.916603 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:22:18.916612 kernel: rcu: RCU event tracing is enabled.
May 17 00:22:18.916621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:22:18.916630 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:22:18.916638 kernel: Rude variant of Tasks RCU enabled.
May 17 00:22:18.916646 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:22:18.916658 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:22:18.916679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:22:18.916688 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:22:18.916696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:22:18.916707 kernel: Console: colour VGA+ 80x25
May 17 00:22:18.916716 kernel: printk: console [tty0] enabled
May 17 00:22:18.916724 kernel: printk: console [ttyS0] enabled
May 17 00:22:18.916733 kernel: ACPI: Core revision 20230628
May 17 00:22:18.916741 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:22:18.916753 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:22:18.916761 kernel: x2apic enabled
May 17 00:22:18.916769 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:22:18.916778 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:22:18.916786 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
May 17 00:22:18.916794 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
May 17 00:22:18.916803 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 17 00:22:18.916811 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 17 00:22:18.916837 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:22:18.916849 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:22:18.916862 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:22:18.916878 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:22:18.916893 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:22:18.916904 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:22:18.916913 kernel: MDS: Mitigation: Clear CPU buffers
May 17 00:22:18.916921 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:22:18.916933 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:22:18.916946 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:22:18.916959 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:22:18.916988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:22:18.917004 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:22:18.917017 kernel: Freeing SMP alternatives memory: 32K
May 17 00:22:18.917025 kernel: pid_max: default: 32768 minimum: 301
May 17 00:22:18.917035 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:22:18.917051 kernel: landlock: Up and running.
May 17 00:22:18.917072 kernel: SELinux: Initializing.
May 17 00:22:18.917083 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:22:18.917092 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:22:18.917101 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 17 00:22:18.917109 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:22:18.917118 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:22:18.917127 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:22:18.917136 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 17 00:22:18.917145 kernel: signal: max sigframe size: 1776
May 17 00:22:18.917157 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:22:18.917166 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:22:18.917175 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:22:18.917184 kernel: smp: Bringing up secondary CPUs ...
May 17 00:22:18.917193 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:22:18.917202 kernel: .... node #0, CPUs: #1
May 17 00:22:18.917210 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:22:18.917219 kernel: smpboot: Max logical packages: 1
May 17 00:22:18.917234 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
May 17 00:22:18.917251 kernel: devtmpfs: initialized
May 17 00:22:18.917264 kernel: x86/mm: Memory block size: 128MB
May 17 00:22:18.917277 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:22:18.917294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:22:18.917309 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:22:18.917326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:22:18.917342 kernel: audit: initializing netlink subsys (disabled)
May 17 00:22:18.917352 kernel: audit: type=2000 audit(1747441338.408:1): state=initialized audit_enabled=0 res=1
May 17 00:22:18.917361 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:22:18.917375 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:22:18.917384 kernel: cpuidle: using governor menu
May 17 00:22:18.917393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:22:18.917402 kernel: dca service started, version 1.12.1
May 17 00:22:18.917411 kernel: PCI: Using configuration type 1 for base access
May 17 00:22:18.917420 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:22:18.917429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:22:18.917437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:22:18.917446 kernel: ACPI: Added _OSI(Module Device)
May 17 00:22:18.917458 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:22:18.917466 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:22:18.917475 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:22:18.917484 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:22:18.917493 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:22:18.917502 kernel: ACPI: Interpreter enabled
May 17 00:22:18.917510 kernel: ACPI: PM: (supports S0 S5)
May 17 00:22:18.917519 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:22:18.917528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:22:18.917540 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:22:18.917549 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:22:18.917558 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:22:18.917786 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:22:18.917893 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 17 00:22:18.920154 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 17 00:22:18.920194 kernel: acpiphp: Slot [3] registered
May 17 00:22:18.920217 kernel: acpiphp: Slot [4] registered
May 17 00:22:18.920231 kernel: acpiphp: Slot [5] registered
May 17 00:22:18.920244 kernel: acpiphp: Slot [6] registered
May 17 00:22:18.920257 kernel: acpiphp: Slot [7] registered
May 17 00:22:18.920271 kernel: acpiphp: Slot [8] registered
May 17 00:22:18.920284 kernel: acpiphp: Slot [9] registered
May 17 00:22:18.920297 kernel: acpiphp: Slot [10] registered
May 17 00:22:18.920310 kernel: acpiphp: Slot [11] registered
May 17 00:22:18.920323 kernel: acpiphp: Slot [12] registered
May 17 00:22:18.920335 kernel: acpiphp: Slot [13] registered
May 17 00:22:18.920356 kernel: acpiphp: Slot [14] registered
May 17 00:22:18.920367 kernel: acpiphp: Slot [15] registered
May 17 00:22:18.920380 kernel: acpiphp: Slot [16] registered
May 17 00:22:18.920392 kernel: acpiphp: Slot [17] registered
May 17 00:22:18.920404 kernel: acpiphp: Slot [18] registered
May 17 00:22:18.920416 kernel: acpiphp: Slot [19] registered
May 17 00:22:18.920431 kernel: acpiphp: Slot [20] registered
May 17 00:22:18.920444 kernel: acpiphp: Slot [21] registered
May 17 00:22:18.920456 kernel: acpiphp: Slot [22] registered
May 17 00:22:18.920473 kernel: acpiphp: Slot [23] registered
May 17 00:22:18.920485 kernel: acpiphp: Slot [24] registered
May 17 00:22:18.920497 kernel: acpiphp: Slot [25] registered
May 17 00:22:18.920512 kernel: acpiphp: Slot [26] registered
May 17 00:22:18.920524 kernel: acpiphp: Slot [27] registered
May 17 00:22:18.920537 kernel: acpiphp: Slot [28] registered
May 17 00:22:18.920549 kernel: acpiphp: Slot [29] registered
May 17 00:22:18.920562 kernel: acpiphp: Slot [30] registered
May 17 00:22:18.920575 kernel: acpiphp: Slot [31] registered
May 17 00:22:18.920589 kernel: PCI host bridge to bus 0000:00
May 17 00:22:18.920787 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:22:18.920886 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:22:18.921007 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:22:18.921146 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:22:18.921282 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 17 00:22:18.921419 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:22:18.921603 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:22:18.921722 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:22:18.921840 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 17 00:22:18.921937 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 17 00:22:18.923132 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 17 00:22:18.923289 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 17 00:22:18.923442 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 17 00:22:18.923597 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 17 00:22:18.923766 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 17 00:22:18.923907 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 17 00:22:18.924184 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 17 00:22:18.924334 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 17 00:22:18.924475 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 17 00:22:18.924656 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 17 00:22:18.924801 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 17 00:22:18.924942 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 17 00:22:18.925097 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 17 00:22:18.925238 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 17 00:22:18.925367 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:22:18.925523 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:22:18.925673 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 17 00:22:18.925812 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 17 00:22:18.925949 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 17 00:22:18.926202 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:22:18.926353 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 17 00:22:18.926497 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 17 00:22:18.926639 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 17 00:22:18.926828 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 17 00:22:18.926996 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 17 00:22:18.927185 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 17 00:22:18.927363 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 17 00:22:18.927536 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 17 00:22:18.927658 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:22:18.927803 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 17 00:22:18.927938 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 17 00:22:18.928160 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 17 00:22:18.928261 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 17 00:22:18.928356 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 17 00:22:18.928449 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 17 00:22:18.928558 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 17 00:22:18.928665 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 17 00:22:18.928793 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 17 00:22:18.928807 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:22:18.928817 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:22:18.928826 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:22:18.928836 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:22:18.928851 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:22:18.928865 kernel: iommu: Default domain type: Translated
May 17 00:22:18.928884 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:22:18.928893 kernel: PCI: Using ACPI for IRQ routing
May 17 00:22:18.928902 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:22:18.928911 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:22:18.928920 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 17 00:22:18.929071 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 17 00:22:18.929231 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 17 00:22:18.929375 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:22:18.929403 kernel: vgaarb: loaded
May 17 00:22:18.929420 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:22:18.929437 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:22:18.929451 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:22:18.929466 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:22:18.929475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:22:18.929484 kernel: pnp: PnP ACPI init
May 17 00:22:18.929510 kernel: pnp: PnP ACPI: found 4 devices
May 17 00:22:18.929530 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:22:18.929544 kernel: NET: Registered PF_INET protocol family
May 17 00:22:18.929554 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:22:18.929563 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:22:18.929572 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:22:18.929581 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:22:18.929590 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:22:18.929600 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:22:18.929614 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:22:18.929623 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:22:18.929636 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:22:18.929645 kernel: NET: Registered PF_XDP protocol family
May 17 00:22:18.929756 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:22:18.929845 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:22:18.929937 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:22:18.930142 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:22:18.930299 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 17 00:22:18.930461 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 17 00:22:18.930587 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:22:18.930602 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:22:18.930703 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 27926 usecs
May 17 00:22:18.930716 kernel: PCI: CLS 0 bytes, default 64
May 17 00:22:18.930726 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:22:18.930735 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
May 17 00:22:18.930745 kernel: Initialise system trusted keyrings
May 17 00:22:18.930754 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:22:18.930768 kernel: Key type asymmetric registered
May 17 00:22:18.930777 kernel: Asymmetric key parser 'x509' registered
May 17 00:22:18.930786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:22:18.930795 kernel: io scheduler mq-deadline registered
May 17 00:22:18.930804 kernel: io scheduler kyber registered
May 17 00:22:18.930813 kernel: io scheduler bfq registered
May 17 00:22:18.930822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:22:18.930832 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 17 00:22:18.930840 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 17 00:22:18.930849 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 17 00:22:18.930861 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:22:18.930870 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:22:18.930879 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:22:18.930888 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:22:18.930897 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:22:18.931065 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:22:18.931181 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:22:18.931194 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 17 00:22:18.931319 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:22:18 UTC (1747441338)
May 17 00:22:18.931456 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 17 00:22:18.931480 kernel: intel_pstate: CPU model not supported
May 17 00:22:18.931497 kernel: NET: Registered PF_INET6 protocol family
May 17 00:22:18.931513 kernel: Segment Routing with IPv6
May 17 00:22:18.931534 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:22:18.931599 kernel: NET: Registered PF_PACKET protocol family
May 17 00:22:18.931608 kernel: Key type dns_resolver registered
May 17 00:22:18.931622 kernel: IPI shorthand broadcast: enabled
May 17 00:22:18.931631 kernel: sched_clock: Marking stable (799002845, 83402666)->(974026328, -91620817)
May 17 00:22:18.931640 kernel: registered taskstats version 1
May 17 00:22:18.931650 kernel: Loading compiled-in X.509 certificates
May 17 00:22:18.931659 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:22:18.931668 kernel: Key type .fscrypt registered
May 17 00:22:18.931677 kernel: Key type fscrypt-provisioning registered
May 17 00:22:18.931686 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:22:18.931696 kernel: ima: Allocated hash algorithm: sha1
May 17 00:22:18.931707 kernel: ima: No architecture policies found
May 17 00:22:18.931716 kernel: clk: Disabling unused clocks
May 17 00:22:18.931726 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:22:18.931735 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:22:18.931744 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:22:18.931772 kernel: Run /init as init process
May 17 00:22:18.931785 kernel: with arguments:
May 17 00:22:18.931794 kernel: /init
May 17 00:22:18.931803 kernel: with environment:
May 17 00:22:18.931815 kernel: HOME=/
May 17 00:22:18.931824 kernel: TERM=linux
May 17 00:22:18.931834 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:22:18.931846 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:22:18.931859 systemd[1]: Detected virtualization kvm.
May 17 00:22:18.931872 systemd[1]: Detected architecture x86-64.
May 17 00:22:18.931883 systemd[1]: Running in initrd.
May 17 00:22:18.931898 systemd[1]: No hostname configured, using default hostname.
May 17 00:22:18.931910 systemd[1]: Hostname set to <localhost>.
May 17 00:22:18.931920 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:22:18.931930 systemd[1]: Queued start job for default target initrd.target.
May 17 00:22:18.931940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:22:18.931950 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:22:18.931961 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:22:18.932034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:22:18.932044 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:22:18.932058 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:22:18.932069 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:22:18.932080 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:22:18.932090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:22:18.932102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:22:18.932117 systemd[1]: Reached target paths.target - Path Units.
May 17 00:22:18.932135 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:22:18.932148 systemd[1]: Reached target swap.target - Swaps.
May 17 00:22:18.932162 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:22:18.932181 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:22:18.932196 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:22:18.932206 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:22:18.932220 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:22:18.932231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:22:18.932245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:22:18.932261 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:22:18.932279 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:22:18.932291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:22:18.932301 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:22:18.932311 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:22:18.932324 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:22:18.932338 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:22:18.932356 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:22:18.932373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:22:18.932387 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:22:18.932402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:22:18.932417 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:22:18.932484 systemd-journald[183]: Collecting audit messages is disabled.
May 17 00:22:18.932519 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:22:18.932542 systemd-journald[183]: Journal started
May 17 00:22:18.932570 systemd-journald[183]: Runtime Journal (/run/log/journal/7d16c18a3afd4076924331f45d3117b0) is 4.9M, max 39.3M, 34.4M free.
May 17 00:22:18.927325 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:22:18.971551 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:22:18.971591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:22:18.972625 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:22:18.973800 kernel: Bridge firewalling registered
May 17 00:22:18.973841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:22:18.974824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:22:18.979259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:22:18.989203 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:22:18.992239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:22:18.993323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:22:19.003169 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:22:19.021952 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:22:19.023293 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:22:19.024497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:22:19.025133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:22:19.032261 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:22:19.036238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:22:19.056362 dracut-cmdline[215]: dracut-dracut-053
May 17 00:22:19.064504 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:22:19.082704 systemd-resolved[217]: Positive Trust Anchors:
May 17 00:22:19.082720 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:22:19.082755 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:22:19.085585 systemd-resolved[217]: Defaulting to hostname 'linux'.
May 17 00:22:19.086783 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:22:19.089225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:22:19.174015 kernel: SCSI subsystem initialized
May 17 00:22:19.183996 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:22:19.196021 kernel: iscsi: registered transport (tcp)
May 17 00:22:19.218002 kernel: iscsi: registered transport (qla4xxx)
May 17 00:22:19.218073 kernel: QLogic iSCSI HBA Driver
May 17 00:22:19.273010 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:22:19.281262 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:22:19.310182 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:22:19.310276 kernel: device-mapper: uevent: version 1.0.3
May 17 00:22:19.311633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:22:19.360036 kernel: raid6: avx2x4 gen() 15273 MB/s
May 17 00:22:19.377029 kernel: raid6: avx2x2 gen() 15594 MB/s
May 17 00:22:19.394500 kernel: raid6: avx2x1 gen() 12298 MB/s
May 17 00:22:19.394585 kernel: raid6: using algorithm avx2x2 gen() 15594 MB/s
May 17 00:22:19.412147 kernel: raid6: .... xor() 18763 MB/s, rmw enabled
May 17 00:22:19.412225 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:22:19.437006 kernel: xor: automatically using best checksumming function avx
May 17 00:22:19.632005 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:22:19.644957 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:22:19.651226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:22:19.675784 systemd-udevd[400]: Using default interface naming scheme 'v255'.
May 17 00:22:19.681299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:22:19.689276 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:22:19.710119 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
May 17 00:22:19.747264 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:22:19.753275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:22:19.818745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:22:19.825201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:22:19.842411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:22:19.845816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:22:19.847233 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:22:19.848272 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:22:19.855231 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:22:19.878037 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:22:19.890022 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 17 00:22:19.895335 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:22:19.908102 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:22:19.908160 kernel: GPT:9289727 != 125829119
May 17 00:22:19.908173 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:22:19.909629 kernel: GPT:9289727 != 125829119
May 17 00:22:19.909664 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:22:19.909677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:22:19.914005 kernel: scsi host0: Virtio SCSI HBA
May 17 00:22:19.925784 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 17 00:22:19.929781 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 17 00:22:19.968170 kernel: ACPI: bus type USB registered
May 17 00:22:19.968237 kernel: usbcore: registered new interface driver usbfs
May 17 00:22:19.969116 kernel: usbcore: registered new interface driver hub
May 17 00:22:19.972016 kernel: usbcore: registered new device driver usb
May 17 00:22:19.982015 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:22:19.989026 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (445)
May 17 00:22:19.992024 kernel: libata version 3.00 loaded.
May 17 00:22:20.000271 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 17 00:22:20.001448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:22:20.001568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:22:20.003712 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:22:20.006022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:22:20.006229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:22:20.007491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:22:20.011990 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:22:20.012299 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
May 17 00:22:20.015326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:22:20.022891 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 17 00:22:20.037978 kernel: scsi host1: ata_piix
May 17 00:22:20.042044 kernel: scsi host2: ata_piix
May 17 00:22:20.042406 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:22:20.042423 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:22:20.042436 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:22:20.054988 kernel: AES CTR mode by8 optimization enabled
May 17 00:22:20.055799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 17 00:22:20.057217 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 17 00:22:20.063685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:22:20.097221 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:22:20.097876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:22:20.102226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:22:20.105546 disk-uuid[530]: Primary Header is updated.
May 17 00:22:20.105546 disk-uuid[530]: Secondary Entries is updated.
May 17 00:22:20.105546 disk-uuid[530]: Secondary Header is updated.
May 17 00:22:20.117010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:22:20.133991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:22:20.138319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:22:20.252480 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:22:20.252753 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:22:20.252876 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:22:20.255989 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 17 00:22:20.258000 kernel: hub 1-0:1.0: USB hub found
May 17 00:22:20.262057 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:22:21.128045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:22:21.128387 disk-uuid[532]: The operation has completed successfully.
May 17 00:22:21.164710 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:22:21.164842 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:22:21.183277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:22:21.188647 sh[561]: Success
May 17 00:22:21.203230 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:22:21.268190 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:22:21.286110 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:22:21.287371 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:22:21.320025 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:22:21.320109 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:22:21.320126 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:22:21.320142 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:22:21.321357 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:22:21.331566 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:22:21.332907 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:22:21.348332 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:22:21.351458 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:22:21.363155 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:22:21.363228 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:22:21.363249 kernel: BTRFS info (device vda6): using free space tree
May 17 00:22:21.369546 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:22:21.381990 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:22:21.382863 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:22:21.387564 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:22:21.392193 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:22:21.547488 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:22:21.556912 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:22:21.559380 ignition[641]: Ignition 2.19.0
May 17 00:22:21.560099 ignition[641]: Stage: fetch-offline
May 17 00:22:21.560177 ignition[641]: no configs at "/usr/lib/ignition/base.d"
May 17 00:22:21.560195 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:22:21.560397 ignition[641]: parsed url from cmdline: ""
May 17 00:22:21.560404 ignition[641]: no config URL provided
May 17 00:22:21.560414 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:22:21.560430 ignition[641]: no config at "/usr/lib/ignition/user.ign"
May 17 00:22:21.560441 ignition[641]: failed to fetch config: resource requires networking
May 17 00:22:21.563275 ignition[641]: Ignition finished successfully
May 17 00:22:21.566939 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:22:21.603997 systemd-networkd[750]: lo: Link UP
May 17 00:22:21.604010 systemd-networkd[750]: lo: Gained carrier
May 17 00:22:21.607369 systemd-networkd[750]: Enumeration completed
May 17 00:22:21.607804 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:22:21.607940 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 17 00:22:21.607946 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:22:21.608322 systemd[1]: Reached target network.target - Network.
May 17 00:22:21.609321 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:22:21.609327 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:22:21.610347 systemd-networkd[750]: eth0: Link UP
May 17 00:22:21.610353 systemd-networkd[750]: eth0: Gained carrier
May 17 00:22:21.610366 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 17 00:22:21.613489 systemd-networkd[750]: eth1: Link UP
May 17 00:22:21.613495 systemd-networkd[750]: eth1: Gained carrier
May 17 00:22:21.613511 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:22:21.616184 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:22:21.626077 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.24/20 acquired from 169.254.169.253
May 17 00:22:21.628078 systemd-networkd[750]: eth0: DHCPv4 address 134.199.214.88/20, gateway 134.199.208.1 acquired from 169.254.169.253
May 17 00:22:21.643615 ignition[753]: Ignition 2.19.0
May 17 00:22:21.643631 ignition[753]: Stage: fetch
May 17 00:22:21.643924 ignition[753]: no configs at "/usr/lib/ignition/base.d"
May 17 00:22:21.643942 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:22:21.644136 ignition[753]: parsed url from cmdline: ""
May 17 00:22:21.644143 ignition[753]: no config URL provided
May 17 00:22:21.644153 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:22:21.644168 ignition[753]: no config at "/usr/lib/ignition/user.ign"
May 17 00:22:21.644194 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:22:21.662285 ignition[753]: GET result: OK
May 17 00:22:21.662452 ignition[753]: parsing config with SHA512: 2c9995bd10c8e253e2ff23727acaf2dfcda69547a8bfc3e88d295056686127b75b380ca051f14c68cef7594134039b70122693d4599e14a3689e1b19c5caeadf
May 17 00:22:21.667419 unknown[753]: fetched base config from "system"
May 17 00:22:21.667449 unknown[753]: fetched base config from "system"
May 17 00:22:21.668151 ignition[753]: fetch: fetch complete
May 17 00:22:21.667455 unknown[753]: fetched user config from "digitalocean"
May 17 00:22:21.668165 ignition[753]: fetch: fetch passed
May 17 00:22:21.668221 ignition[753]: Ignition finished successfully
May 17 00:22:21.671303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:22:21.678329 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:22:21.700894 ignition[760]: Ignition 2.19.0
May 17 00:22:21.700905 ignition[760]: Stage: kargs
May 17 00:22:21.701131 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 17 00:22:21.701142 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:22:21.702128 ignition[760]: kargs: kargs passed
May 17 00:22:21.703346 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:22:21.702250 ignition[760]: Ignition finished successfully
May 17 00:22:21.711215 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:22:21.726452 ignition[766]: Ignition 2.19.0
May 17 00:22:21.726464 ignition[766]: Stage: disks
May 17 00:22:21.726653 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 17 00:22:21.726664 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:22:21.728393 ignition[766]: disks: disks passed
May 17 00:22:21.728497 ignition[766]: Ignition finished successfully
May 17 00:22:21.729911 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:22:21.733948 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:22:21.734600 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:22:21.735481 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:22:21.736318 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:22:21.737197 systemd[1]: Reached target basic.target - Basic System.
May 17 00:22:21.744253 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:22:21.764661 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:22:21.766845 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:22:21.771159 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:22:21.882011 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:22:21.883063 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:22:21.884246 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:22:21.896159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:22:21.899200 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:22:21.902379 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
May 17 00:22:21.908997 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (782)
May 17 00:22:21.912889 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:22:21.919265 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:22:21.919294 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:22:21.919309 kernel: BTRFS info (device vda6): using free space tree
May 17 00:22:21.918088 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:22:21.921690 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:22:21.918130 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:22:21.926491 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:22:21.928171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:22:21.936372 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:22:22.014403 coreos-metadata[784]: May 17 00:22:22.014 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:22:22.015855 coreos-metadata[785]: May 17 00:22:22.015 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:22:22.019501 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:22:22.026789 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
May 17 00:22:22.030573 coreos-metadata[784]: May 17 00:22:22.030 INFO Fetch successful
May 17 00:22:22.034044 coreos-metadata[785]: May 17 00:22:22.034 INFO Fetch successful
May 17 00:22:22.035664 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:22:22.041097 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:22:22.041222 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
May 17 00:22:22.044090 coreos-metadata[785]: May 17 00:22:22.043 INFO wrote hostname ci-4081.3.3-n-2d1cdc348f to /sysroot/etc/hostname
May 17 00:22:22.045775 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:22:22.048430 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:22:22.165205 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:22:22.173167 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:22:22.189160 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:22:22.198998 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:22:22.217361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:22:22.228050 ignition[903]: INFO : Ignition 2.19.0
May 17 00:22:22.228050 ignition[903]: INFO : Stage: mount
May 17 00:22:22.229260 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:22:22.229260 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:22:22.231471 ignition[903]: INFO : mount: mount passed
May 17 00:22:22.231471 ignition[903]: INFO : Ignition finished successfully
May 17 00:22:22.231382 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:22:22.240166 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:22:22.317838 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:22:22.324248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:22:22.345025 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (914)
May 17 00:22:22.347021 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:22:22.347100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:22:22.348312 kernel: BTRFS info (device vda6): using free space tree
May 17 00:22:22.352030 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:22:22.354757 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:22:22.380785 ignition[930]: INFO : Ignition 2.19.0 May 17 00:22:22.381560 ignition[930]: INFO : Stage: files May 17 00:22:22.382344 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:22.382870 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 17 00:22:22.384647 ignition[930]: DEBUG : files: compiled without relabeling support, skipping May 17 00:22:22.386549 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:22:22.387408 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:22:22.392048 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:22:22.392964 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:22:22.392964 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:22:22.392664 unknown[930]: wrote ssh authorized keys file for user: core May 17 00:22:22.395164 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:22:22.395164 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:22:22.395164 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:22:22.395164 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:22:22.434307 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:22:22.555186 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:22:22.555186 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:22:22.556713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:22:22.561616 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:22:22.561616 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:22:22.561616 
ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:22.561616 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:22.561616 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:22.561616 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:22:23.096260 systemd-networkd[750]: eth1: Gained IPv6LL May 17 00:22:23.160260 systemd-networkd[750]: eth0: Gained IPv6LL May 17 00:22:23.229242 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:22:23.573796 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:23.573796 ignition[930]: INFO : files: op(c): [started] processing unit "containerd.service" May 17 00:22:23.576943 ignition[930]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:22:23.577925 ignition[930]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:22:23.577925 ignition[930]: INFO : files: op(c): [finished] processing unit "containerd.service" May 17 00:22:23.577925 ignition[930]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 17 00:22:23.577925 ignition[930]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:22:23.577925 ignition[930]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:22:23.577925 ignition[930]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 17 00:22:23.577925 ignition[930]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:22:23.577925 ignition[930]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:22:23.577925 ignition[930]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:22:23.577925 ignition[930]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:22:23.577925 ignition[930]: INFO : files: files passed May 17 00:22:23.577925 ignition[930]: INFO : Ignition finished successfully May 17 00:22:23.580346 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:22:23.589363 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:22:23.593256 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:22:23.606817 systemd[1]: ignition-quench.service: Deactivated successfully. 
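The files stage above is driven by an Ignition config: it writes a containerd drop-in (10-use-cgroupfs.conf), installs and enables prepare-helm.service, and links the kubernetes sysext image. An illustrative fragment in the v3 config schema that would produce the two unit operations logged; the actual config and unit contents are not shown in this log, so the bodies below are placeholders:

import json

config = {
    "ignition": {"version": "3.3.0"},
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {"name": "10-use-cgroupfs.conf",
                     "contents": "[Service]\n# placeholder drop-in body\n"}
                ],
            },
            {
                "name": "prepare-helm.service",
                "enabled": True,  # matches "setting preset to enabled" above
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n# placeholder body\n",
            },
        ]
    },
}
print(json.dumps(config, indent=2))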
May 17 00:22:23.606963 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:22:23.623167 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:23.623167 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:23.626964 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:23.629108 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:22:23.630834 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:22:23.636379 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:22:23.685850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:22:23.686058 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:22:23.687595 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:22:23.688450 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:22:23.689423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:22:23.699282 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:22:23.719190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:22:23.726408 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:22:23.749211 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:23.750027 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:23.750732 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:22:23.751713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:22:23.751913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:22:23.753054 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:22:23.754049 systemd[1]: Stopped target basic.target - Basic System. May 17 00:22:23.755236 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:22:23.755991 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:22:23.756786 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:22:23.757749 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:22:23.758906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:22:23.759856 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:22:23.760670 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:22:23.761547 systemd[1]: Stopped target swap.target - Swaps. May 17 00:22:23.762379 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:22:23.762570 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:22:23.763786 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:22:23.764814 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:22:23.765714 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 17 00:22:23.765865 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:22:23.766672 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:22:23.766880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:22:23.768158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:22:23.768419 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:22:23.769428 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:22:23.769653 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:22:23.771044 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:22:23.771218 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:22:23.784438 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:22:23.791418 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:22:23.792659 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:22:23.792895 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:22:23.794530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:22:23.794736 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:22:23.808220 ignition[984]: INFO : Ignition 2.19.0 May 17 00:22:23.811615 ignition[984]: INFO : Stage: umount May 17 00:22:23.811615 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:23.811615 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 17 00:22:23.810861 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:22:23.820670 ignition[984]: INFO : umount: umount passed May 17 00:22:23.820670 ignition[984]: INFO : Ignition finished successfully May 17 00:22:23.811029 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:22:23.821655 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:22:23.821817 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:22:23.822962 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:22:23.825045 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:22:23.825547 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:22:23.825604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:22:23.826251 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:22:23.826314 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:22:23.828185 systemd[1]: Stopped target network.target - Network. May 17 00:22:23.829152 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:22:23.829238 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:22:23.829918 systemd[1]: Stopped target paths.target - Path Units. May 17 00:22:23.832640 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:22:23.837110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:22:23.837707 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:22:23.840249 systemd[1]: Stopped target sockets.target - Socket Units. 
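Before the umount stage tears the initrd environment down, the files stage recorded its outcome in /sysroot/etc/.ignition-result.json, which survives into the real root as /etc/.ignition-result.json. A trivial post-boot reader; the result file's exact schema is not shown in this log:

import json

with open("/etc/.ignition-result.json") as f:
    result = json.load(f)
print(json.dumps(result, indent=2))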
May 17 00:22:23.841176 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:22:23.841237 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:22:23.841823 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:22:23.841889 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:22:23.842627 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:22:23.842725 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:22:23.845539 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:22:23.845619 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:22:23.846496 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:22:23.847779 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:22:23.850329 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:22:23.851021 systemd-networkd[750]: eth1: DHCPv6 lease lost May 17 00:22:23.853096 systemd-networkd[750]: eth0: DHCPv6 lease lost May 17 00:22:23.855252 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:22:23.855417 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:22:23.858447 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:22:23.858632 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:22:23.861790 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:22:23.861928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:22:23.864799 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:22:23.864901 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:22:23.865527 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:22:23.865593 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:22:23.871172 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:22:23.871593 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:22:23.871684 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:22:23.872222 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:22:23.872282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:23.872724 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:22:23.872771 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:22:23.876233 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:22:23.876301 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:23.877445 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:23.890002 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:22:23.890516 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:23.892439 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:22:23.892564 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:22:23.893618 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:22:23.893665 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 17 00:22:23.894674 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:22:23.894765 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:22:23.895761 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:22:23.895828 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:22:23.898492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:22:23.898584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:23.905274 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:22:23.905893 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:22:23.906056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:23.906932 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:22:23.907031 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:23.907522 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:22:23.907580 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:23.911663 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:23.911740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:23.912808 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:22:23.913008 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:22:23.916564 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:22:23.916729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:22:23.918422 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:22:23.925270 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:22:23.938540 systemd[1]: Switching root. May 17 00:22:23.976565 systemd-journald[183]: Journal stopped May 17 00:22:25.351858 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). May 17 00:22:25.353023 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:22:25.353077 kernel: SELinux: policy capability open_perms=1 May 17 00:22:25.353096 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:22:25.353114 kernel: SELinux: policy capability always_check_network=0 May 17 00:22:25.353133 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:22:25.353152 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:22:25.353182 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:22:25.353219 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:22:25.353242 kernel: audit: type=1403 audit(1747441344.266:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:22:25.353264 systemd[1]: Successfully loaded SELinux policy in 44.759ms. May 17 00:22:25.353292 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.119ms. 
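The audit record replayed above carries its own Unix timestamp, 1747441344.266. Converting it confirms the SELinux policy-load event falls inside this boot window:

from datetime import datetime, timezone

# type=1403 audit(1747441344.266:2) from the log above
print(datetime.fromtimestamp(1747441344.266, tz=timezone.utc))
# -> 2025-05-17 00:22:24.266000+00:00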
May 17 00:22:25.353316 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:22:25.353335 systemd[1]: Detected virtualization kvm. May 17 00:22:25.353352 systemd[1]: Detected architecture x86-64. May 17 00:22:25.353370 systemd[1]: Detected first boot. May 17 00:22:25.353396 systemd[1]: Hostname set to <ci-4081.3.3-n-2d1cdc348f>. May 17 00:22:25.353421 systemd[1]: Initializing machine ID from VM UUID. May 17 00:22:25.353441 zram_generator::config[1044]: No configuration found. May 17 00:22:25.353461 systemd[1]: Populated /etc with preset unit settings. May 17 00:22:25.353479 systemd[1]: Queued start job for default target multi-user.target. May 17 00:22:25.353506 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 17 00:22:25.353530 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:22:25.353549 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:22:25.353569 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:22:25.353594 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:22:25.353613 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:22:25.353633 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:22:25.353652 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:22:25.353673 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:22:25.353692 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:22:25.353708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:22:25.353727 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:22:25.353753 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:22:25.353772 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:22:25.353792 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:22:25.353811 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:22:25.353831 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:22:25.353850 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:22:25.353868 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:25.353889 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:22:25.353914 systemd[1]: Reached target slices.target - Slice Units. May 17 00:22:25.353935 systemd[1]: Reached target swap.target - Swaps. May 17 00:22:25.353956 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:22:25.357160 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:22:25.357207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
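"Initializing machine ID from VM UUID" above means systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID on this KVM droplet (dashes dropped, lowercased). A quick check of the two values; reading the sysfs file needs root, and the equality assumes no machine-id override was supplied:

with open("/etc/machine-id") as f:
    machine_id = f.read().strip()
with open("/sys/class/dmi/id/product_uuid") as f:
    product_uuid = f.read().strip()
# On a first boot like this one the two should agree.
print(machine_id == product_uuid.replace("-", "").lower())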
May 17 00:22:25.357227 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:22:25.357249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:22:25.357267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:22:25.357296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:22:25.357314 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:22:25.357333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:22:25.357372 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:22:25.357392 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:22:25.357420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:25.357440 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:22:25.357459 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:22:25.357476 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:22:25.357503 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:22:25.357522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:25.357543 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:22:25.357562 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:22:25.357580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:25.357599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:22:25.357617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:25.357636 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:22:25.357661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:25.357682 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:22:25.357702 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:22:25.357723 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 17 00:22:25.357743 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:22:25.357765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:22:25.357785 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:22:25.357805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:22:25.357832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:22:25.357854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:25.357873 kernel: fuse: init (API version 7.39) May 17 00:22:25.357893 kernel: loop: module loaded May 17 00:22:25.357911 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 17 00:22:25.357930 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:22:25.357949 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:22:25.357985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:22:25.358006 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:22:25.358031 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:22:25.358050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:25.358067 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:22:25.358087 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:22:25.358106 kernel: ACPI: bus type drm_connector registered May 17 00:22:25.358124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:25.358143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:25.358183 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:22:25.358210 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:22:25.358229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:25.358248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:22:25.358267 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:22:25.358292 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:22:25.358360 systemd-journald[1132]: Collecting audit messages is disabled. May 17 00:22:25.358402 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:25.358424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:25.358442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:22:25.358461 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:22:25.358486 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:22:25.358504 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:22:25.358525 systemd-journald[1132]: Journal started May 17 00:22:25.358565 systemd-journald[1132]: Runtime Journal (/run/log/journal/7d16c18a3afd4076924331f45d3117b0) is 4.9M, max 39.3M, 34.4M free. May 17 00:22:25.364859 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:22:25.381059 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:22:25.381164 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:22:25.400000 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:22:25.401991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:22:25.412001 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:22:25.415006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:22:25.428006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
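journald above sizes its runtime journal at 4.9M (max 39.3M) under /run/log/journal/<machine-id>. After boot the same accounting can be queried with journalctl; a sketch that simply shells out, assuming permission to read the journals:

import subprocess

usage = subprocess.run(["journalctl", "--disk-usage"],
                       capture_output=True, text=True, check=True)
print(usage.stdout)  # e.g. "Archived and active journals take up ... in the file system."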
May 17 00:22:25.441006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:22:25.450988 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:22:25.460551 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:22:25.462349 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:22:25.463126 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:22:25.464574 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:22:25.483522 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:22:25.510620 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:22:25.524071 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:22:25.535299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:22:25.539666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:25.549104 systemd-journald[1132]: Time spent on flushing to /var/log/journal/7d16c18a3afd4076924331f45d3117b0 is 19.292ms for 980 entries. May 17 00:22:25.549104 systemd-journald[1132]: System Journal (/var/log/journal/7d16c18a3afd4076924331f45d3117b0) is 8.0M, max 195.6M, 187.6M free. May 17 00:22:25.576417 systemd-journald[1132]: Received client request to flush runtime journal. May 17 00:22:25.578713 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:22:25.586103 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. May 17 00:22:25.586134 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. May 17 00:22:25.588614 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:22:25.603170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:25.610366 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:22:25.656666 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:22:25.668254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:22:25.693621 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. May 17 00:22:25.693653 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. May 17 00:22:25.702708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:26.315613 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:22:26.325317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:26.354779 systemd-udevd[1215]: Using default interface naming scheme 'v255'. May 17 00:22:26.381601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:26.391980 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:22:26.421267 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:22:26.450567 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
May 17 00:22:26.507016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1229) May 17 00:22:26.518031 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:22:26.527852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:26.528072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:26.537316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:26.539133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:26.546257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:26.546784 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:22:26.546841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:22:26.546905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:26.556339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:26.556532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:26.565120 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:26.565300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:26.566525 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:22:26.574769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:26.577153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:22:26.600746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:22:26.654015 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:22:26.663715 systemd-networkd[1220]: lo: Link UP May 17 00:22:26.663725 systemd-networkd[1220]: lo: Gained carrier May 17 00:22:26.667050 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 17 00:22:26.666454 systemd-networkd[1220]: Enumeration completed May 17 00:22:26.666657 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:22:26.670066 systemd-networkd[1220]: eth0: Configuring with /run/systemd/network/10-d2:c6:5f:95:eb:e2.network. May 17 00:22:26.670843 systemd-networkd[1220]: eth1: Configuring with /run/systemd/network/10-72:f2:30:14:01:d0.network. May 17 00:22:26.671359 systemd-networkd[1220]: eth0: Link UP May 17 00:22:26.671364 systemd-networkd[1220]: eth0: Gained carrier May 17 00:22:26.676225 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:22:26.677147 systemd-networkd[1220]: eth1: Link UP May 17 00:22:26.677157 systemd-networkd[1220]: eth1: Gained carrier May 17 00:22:26.684046 kernel: ACPI: button: Power Button [PWRF] May 17 00:22:26.690065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
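systemd-networkd matches eth0 and eth1 against generated /run/systemd/network/10-<mac>.network files named in the log above. A minimal example of what such a unit can look like, using eth0's MAC from the log; the real generated contents are not shown here, so the [Network] section is illustrative:

import textwrap

network_unit = textwrap.dedent("""\
    [Match]
    MACAddress=d2:c6:5f:95:eb:e2

    [Network]
    DHCP=ipv4
""")
with open("/run/systemd/network/10-d2:c6:5f:95:eb:e2.network", "w") as f:
    f.write(network_unit)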
May 17 00:22:26.714187 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:22:26.770035 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:22:26.788341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:26.794991 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 17 00:22:26.797003 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 17 00:22:26.807005 kernel: Console: switching to colour dummy device 80x25 May 17 00:22:26.809068 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:22:26.809225 kernel: [drm] features: -context_init May 17 00:22:26.814068 kernel: [drm] number of scanouts: 1 May 17 00:22:26.814196 kernel: [drm] number of cap sets: 0 May 17 00:22:26.819141 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 17 00:22:26.830437 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 17 00:22:26.830537 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:22:26.840088 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:22:26.846447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:26.846794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:26.862365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:26.880104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:26.880651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:26.897427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:26.994051 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:27.030058 kernel: EDAC MC: Ver: 3.0.0 May 17 00:22:27.060836 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:22:27.072450 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:22:27.089041 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:22:27.126706 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:22:27.128670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:22:27.136493 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:22:27.143577 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:22:27.173498 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:22:27.175142 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:22:27.184150 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 17 00:22:27.186248 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:22:27.186312 systemd[1]: Reached target machines.target - Containers. May 17 00:22:27.189102 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 17 00:22:27.208006 kernel: ISO 9660 Extensions: RRIP_1991A May 17 00:22:27.207923 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 17 00:22:27.210795 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:22:27.213259 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:22:27.221296 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:22:27.226305 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:22:27.230265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:22:27.244418 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:22:27.249224 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:22:27.251890 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:22:27.262321 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:22:27.276275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:22:27.281926 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:22:27.296603 kernel: loop0: detected capacity change from 0 to 8 May 17 00:22:27.310810 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:22:27.332026 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:22:27.378305 kernel: loop2: detected capacity change from 0 to 140768 May 17 00:22:27.424018 kernel: loop3: detected capacity change from 0 to 221472 May 17 00:22:27.474031 kernel: loop4: detected capacity change from 0 to 8 May 17 00:22:27.479350 kernel: loop5: detected capacity change from 0 to 142488 May 17 00:22:27.505139 kernel: loop6: detected capacity change from 0 to 140768 May 17 00:22:27.520940 kernel: loop7: detected capacity change from 0 to 221472 May 17 00:22:27.533049 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 17 00:22:27.533698 (sd-merge)[1309]: Merged extensions into '/usr'. May 17 00:22:27.543730 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:22:27.544070 systemd[1]: Reloading... May 17 00:22:27.627149 zram_generator::config[1336]: No configuration found. May 17 00:22:27.832736 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:22:27.869136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:22:27.945294 systemd[1]: Reloading finished in 400 ms. May 17 00:22:27.964748 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:22:27.966150 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:22:27.978241 systemd[1]: Starting ensure-sysext.service... May 17 00:22:27.984170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:22:27.991572 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... 
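The (sd-merge) lines above show systemd-sysext overlaying four extension images onto /usr: containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean. The kubernetes image is the one the Ignition files stage linked into /etc/extensions earlier; resolving that link shows where the raw image actually lives:

import os

link = "/etc/extensions/kubernetes.raw"
print(os.path.realpath(link))
# per the files-stage log: /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw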
May 17 00:22:27.991592 systemd[1]: Reloading... May 17 00:22:28.029352 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:22:28.029827 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:22:28.030987 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:22:28.031359 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. May 17 00:22:28.031446 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. May 17 00:22:28.035003 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:22:28.035017 systemd-tmpfiles[1388]: Skipping /boot May 17 00:22:28.049791 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:22:28.049810 systemd-tmpfiles[1388]: Skipping /boot May 17 00:22:28.109603 zram_generator::config[1420]: No configuration found. May 17 00:22:28.242933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:22:28.318335 systemd[1]: Reloading finished in 326 ms. May 17 00:22:28.348745 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:28.357208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:22:28.362283 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:22:28.373235 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:22:28.387521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:22:28.402242 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:22:28.417164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:28.417396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:28.427278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:28.440294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:28.447716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:28.450942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:22:28.453681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:28.465208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:28.466250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:28.468831 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:28.473140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:28.480134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:28.480671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 17 00:22:28.492708 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:22:28.498800 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:22:28.509524 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:22:28.528821 systemd[1]: Finished ensure-sysext.service. May 17 00:22:28.534328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:28.534771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:28.540347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:28.545505 augenrules[1506]: No rules May 17 00:22:28.555875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:22:28.562257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:28.579437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:28.585260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:22:28.600286 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:22:28.602138 systemd-networkd[1220]: eth0: Gained IPv6LL May 17 00:22:28.620201 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:22:28.625279 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:22:28.625334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:28.628360 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:22:28.631133 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:22:28.633528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:28.633794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:28.635531 systemd-resolved[1477]: Positive Trust Anchors: May 17 00:22:28.636064 systemd-resolved[1477]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:22:28.636230 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:22:28.637612 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:22:28.637837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:22:28.638903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:28.639877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
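The positive trust anchor resolved loads above is the root zone's DNSSEC DS record; its fields decode per standard DS semantics:

# ". IN DS 20326 8 2 e06d..." from the log above
root_trust_anchor = {
    "owner": ".",        # the root zone itself
    "key_tag": 20326,    # the 2017 root key-signing key ("KSK-2017")
    "algorithm": 8,      # RSA/SHA-256
    "digest_type": 2,    # SHA-256 digest of the DNSKEY
    "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
}
print(root_trust_anchor)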
May 17 00:22:28.643336 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:28.643596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:28.652088 systemd-resolved[1477]: Using system hostname 'ci-4081.3.3-n-2d1cdc348f'. May 17 00:22:28.657938 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:22:28.661680 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:22:28.664270 systemd-networkd[1220]: eth1: Gained IPv6LL May 17 00:22:28.665746 systemd[1]: Reached target network.target - Network. May 17 00:22:28.667089 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:22:28.667695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:28.668344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:22:28.668446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:22:28.723308 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:22:28.724005 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:22:28.724700 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:22:28.726726 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:22:28.727442 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:22:28.729131 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:22:28.729183 systemd[1]: Reached target paths.target - Path Units. May 17 00:22:28.730799 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:22:29.511867 systemd-resolved[1477]: Clock change detected. Flushing caches. May 17 00:22:29.512022 systemd-timesyncd[1519]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org). May 17 00:22:29.512078 systemd-timesyncd[1519]: Initial clock synchronization to Sat 2025-05-17 00:22:29.511806 UTC. May 17 00:22:29.515367 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:22:29.516247 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:22:29.518140 systemd[1]: Reached target timers.target - Timer Units. May 17 00:22:29.520390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:22:29.524291 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:22:29.527920 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:22:29.535210 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:22:29.535966 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:22:29.537276 systemd[1]: Reached target basic.target - Basic System. May 17 00:22:29.538866 systemd[1]: System is tainted: cgroupsv1 May 17 00:22:29.538926 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:22:29.538951 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
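Once timesyncd reaches 129.250.35.251 (0.flatcar.pool.ntp.org), the wall clock is stepped and resolved flushes its caches; that is why the journal jumps from 00:22:28.73 to 00:22:29.51 between adjacent entries above. The apparent gap is an upper bound on the step, since a little real time also passed:

from datetime import datetime

def ts(stamp):
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

before = ts("May 17 00:22:28.730799")  # last entry on the pre-sync clock
after = ts("May 17 00:22:29.511867")   # the "Clock change detected" entry
print((after - before).total_seconds())  # ~0.78 s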
May 17 00:22:29.545620 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:22:29.550989 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:22:29.555660 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:22:29.564608 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:22:29.573714 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:22:29.575502 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:22:29.582006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:29.586458 jq[1539]: false May 17 00:22:29.595635 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:22:29.606678 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:22:29.607995 dbus-daemon[1538]: [system] SELinux support is enabled May 17 00:22:29.618636 coreos-metadata[1537]: May 17 00:22:29.618 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:22:29.622588 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:22:29.633664 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:22:29.640659 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:22:29.648918 extend-filesystems[1540]: Found loop4 May 17 00:22:29.654498 extend-filesystems[1540]: Found loop5 May 17 00:22:29.654498 extend-filesystems[1540]: Found loop6 May 17 00:22:29.654498 extend-filesystems[1540]: Found loop7 May 17 00:22:29.654498 extend-filesystems[1540]: Found vda May 17 00:22:29.654498 extend-filesystems[1540]: Found vda1 May 17 00:22:29.654498 extend-filesystems[1540]: Found vda2 May 17 00:22:29.654498 extend-filesystems[1540]: Found vda3 May 17 00:22:29.654498 extend-filesystems[1540]: Found usr May 17 00:22:29.654498 extend-filesystems[1540]: Found vda4 May 17 00:22:29.654498 extend-filesystems[1540]: Found vda6 May 17 00:22:29.654498 extend-filesystems[1540]: Found vda7 May 17 00:22:29.654498 extend-filesystems[1540]: Found vda9 May 17 00:22:29.654498 extend-filesystems[1540]: Checking size of /dev/vda9 May 17 00:22:29.688800 coreos-metadata[1537]: May 17 00:22:29.657 INFO Fetch successful May 17 00:22:29.657740 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:22:29.668207 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:22:29.682941 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:22:29.701546 extend-filesystems[1540]: Resized partition /dev/vda9 May 17 00:22:29.699836 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:22:29.704086 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:22:29.717764 extend-filesystems[1570]: resize2fs 1.47.1 (20-May-2024) May 17 00:22:29.721977 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 17 00:22:29.727021 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:22:29.727311 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 17 00:22:29.741563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:22:29.741858 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:22:29.744559 update_engine[1565]: I20250517 00:22:29.744328 1565 main.cc:92] Flatcar Update Engine starting May 17 00:22:29.750126 update_engine[1565]: I20250517 00:22:29.748124 1565 update_check_scheduler.cc:74] Next update check in 9m10s May 17 00:22:29.750453 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1221) May 17 00:22:29.767928 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:22:29.768257 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:22:29.807973 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 17 00:22:29.808122 jq[1566]: true May 17 00:22:29.821477 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:22:29.829503 extend-filesystems[1570]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:22:29.829503 extend-filesystems[1570]: old_desc_blocks = 1, new_desc_blocks = 8 May 17 00:22:29.829503 extend-filesystems[1570]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 17 00:22:29.848772 extend-filesystems[1540]: Resized filesystem in /dev/vda9 May 17 00:22:29.848772 extend-filesystems[1540]: Found vdb May 17 00:22:29.851348 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:22:29.851676 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:22:29.881381 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:22:29.904559 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:22:29.909906 tar[1578]: linux-amd64/helm May 17 00:22:29.920710 jq[1591]: true May 17 00:22:29.929213 systemd[1]: Started update-engine.service - Update Engine. May 17 00:22:29.950319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:22:29.951247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:22:29.951317 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:22:29.952023 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:22:29.952163 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 17 00:22:29.952194 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:22:29.954316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:22:29.956659 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:22:30.089403 systemd-logind[1555]: New seat seat0. 
May 17 00:22:30.094514 bash[1624]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:22:30.097731 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:22:30.124661 systemd[1]: Starting sshkeys.service...
May 17 00:22:30.130113 systemd-logind[1555]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:22:30.130151 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:22:30.134826 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:22:30.194158 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:22:30.222566 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:22:30.320615 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:22:30.370108 coreos-metadata[1631]: May 17 00:22:30.370 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:22:30.392534 coreos-metadata[1631]: May 17 00:22:30.391 INFO Fetch successful
May 17 00:22:30.413440 unknown[1631]: wrote ssh authorized keys file for user: core
May 17 00:22:30.468038 update-ssh-keys[1641]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:22:30.469548 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:22:30.478619 systemd[1]: Finished sshkeys.service.
May 17 00:22:30.568069 containerd[1593]: time="2025-05-17T00:22:30.567931850Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:22:30.665861 containerd[1593]: time="2025-05-17T00:22:30.665431781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.673577 containerd[1593]: time="2025-05-17T00:22:30.673493743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:22:30.673577 containerd[1593]: time="2025-05-17T00:22:30.673572058Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:22:30.673723 containerd[1593]: time="2025-05-17T00:22:30.673610977Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:22:30.673856 containerd[1593]: time="2025-05-17T00:22:30.673834012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:22:30.673881 containerd[1593]: time="2025-05-17T00:22:30.673869710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.673981 containerd[1593]: time="2025-05-17T00:22:30.673952356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:22:30.674007 containerd[1593]: time="2025-05-17T00:22:30.673977110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.677603 containerd[1593]: time="2025-05-17T00:22:30.674364519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:22:30.677603 containerd[1593]: time="2025-05-17T00:22:30.674403739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.677603 containerd[1593]: time="2025-05-17T00:22:30.677496440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:22:30.677603 containerd[1593]: time="2025-05-17T00:22:30.677542318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.677847 containerd[1593]: time="2025-05-17T00:22:30.677727751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.678147 containerd[1593]: time="2025-05-17T00:22:30.678109042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:22:30.680605 containerd[1593]: time="2025-05-17T00:22:30.680548451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:22:30.680605 containerd[1593]: time="2025-05-17T00:22:30.680600219Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:22:30.680816 containerd[1593]: time="2025-05-17T00:22:30.680787171Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:22:30.680901 containerd[1593]: time="2025-05-17T00:22:30.680867589Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:22:30.686628 containerd[1593]: time="2025-05-17T00:22:30.686556778Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:22:30.686758 containerd[1593]: time="2025-05-17T00:22:30.686673782Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:22:30.686758 containerd[1593]: time="2025-05-17T00:22:30.686701557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:22:30.686802 containerd[1593]: time="2025-05-17T00:22:30.686763175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:22:30.686802 containerd[1593]: time="2025-05-17T00:22:30.686789388Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:22:30.688064 containerd[1593]: time="2025-05-17T00:22:30.687046482Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.689756666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690050916Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690083861Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690125576Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690163866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690186771Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690205949Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690238241Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690263484Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690283181Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690302086Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690321348Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690350037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:22:30.690774 containerd[1593]: time="2025-05-17T00:22:30.690383897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.690406231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.691906429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.691937706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.691962949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.692008033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.692033536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:22:30.692084 containerd[1593]: time="2025-05-17T00:22:30.692077068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692104837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692134610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692156419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692175357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692213550Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692251376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692274402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692290450Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692352495Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692379650Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692397506Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692509573Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:22:30.694065 containerd[1593]: time="2025-05-17T00:22:30.692528276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694338 containerd[1593]: time="2025-05-17T00:22:30.692652590Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:22:30.694338 containerd[1593]: time="2025-05-17T00:22:30.692683177Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:22:30.694338 containerd[1593]: time="2025-05-17T00:22:30.692714310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:22:30.694409 containerd[1593]: time="2025-05-17T00:22:30.693114560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:22:30.694409 containerd[1593]: time="2025-05-17T00:22:30.693236123Z" level=info msg="Connect containerd service"
May 17 00:22:30.694409 containerd[1593]: time="2025-05-17T00:22:30.693293625Z" level=info msg="using legacy CRI server"
May 17 00:22:30.694409 containerd[1593]: time="2025-05-17T00:22:30.693304181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 00:22:30.694776 containerd[1593]: time="2025-05-17T00:22:30.694535741Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697048920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697534453Z" level=info msg="Start subscribing containerd event"
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697622277Z" level=info msg="Start recovering state"
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697722006Z" level=info msg="Start event monitor"
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697738225Z" level=info msg="Start snapshots syncer"
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697754461Z" level=info msg="Start cni network conf syncer for default"
May 17 00:22:30.700444 containerd[1593]: time="2025-05-17T00:22:30.697764942Z" level=info msg="Start streaming server"
May 17 00:22:30.700868 containerd[1593]: time="2025-05-17T00:22:30.700826845Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:22:30.700950 containerd[1593]: time="2025-05-17T00:22:30.700928297Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:22:30.702958 containerd[1593]: time="2025-05-17T00:22:30.701010145Z" level=info msg="containerd successfully booted in 0.137415s"
May 17 00:22:30.701185 systemd[1]: Started containerd.service - containerd container runtime.
May 17 00:22:31.258871 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:22:31.280508 tar[1578]: linux-amd64/LICENSE
May 17 00:22:31.282439 tar[1578]: linux-amd64/README.md
May 17 00:22:31.308017 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 00:22:31.318107 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:22:31.331328 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:22:31.347545 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:22:31.347802 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 17 00:22:31.361307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 00:22:31.391763 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 00:22:31.401969 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 00:22:31.406825 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 17 00:22:31.411848 systemd[1]: Reached target getty.target - Login Prompts.
May 17 00:22:31.519730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:31.524788 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 00:22:31.528666 systemd[1]: Startup finished in 6.499s (kernel) + 6.525s (userspace) = 13.024s.
May 17 00:22:31.532115 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:32.201886 kubelet[1685]: E0517 00:22:32.201825 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:32.204686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:32.204927 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:32.474501 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 17 00:22:32.487216 systemd[1]: Started sshd@0-134.199.214.88:22-139.178.68.195:60142.service - OpenSSH per-connection server daemon (139.178.68.195:60142). May 17 00:22:32.552296 sshd[1697]: Accepted publickey for core from 139.178.68.195 port 60142 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:32.555199 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:32.569278 systemd-logind[1555]: New session 1 of user core. May 17 00:22:32.570299 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:22:32.579811 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:22:32.603710 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:22:32.613788 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:22:32.618876 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:22:32.749712 systemd[1703]: Queued start job for default target default.target. May 17 00:22:32.750616 systemd[1703]: Created slice app.slice - User Application Slice. May 17 00:22:32.750652 systemd[1703]: Reached target paths.target - Paths. May 17 00:22:32.750671 systemd[1703]: Reached target timers.target - Timers. May 17 00:22:32.759639 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:22:32.768471 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:22:32.768569 systemd[1703]: Reached target sockets.target - Sockets. May 17 00:22:32.768591 systemd[1703]: Reached target basic.target - Basic System. May 17 00:22:32.768657 systemd[1703]: Reached target default.target - Main User Target. May 17 00:22:32.768700 systemd[1703]: Startup finished in 141ms. May 17 00:22:32.768850 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:22:32.782294 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:22:32.849790 systemd[1]: Started sshd@1-134.199.214.88:22-139.178.68.195:60144.service - OpenSSH per-connection server daemon (139.178.68.195:60144). May 17 00:22:32.897542 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 60144 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:32.899761 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:32.905900 systemd-logind[1555]: New session 2 of user core. May 17 00:22:32.916981 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:22:32.983442 sshd[1715]: pam_unix(sshd:session): session closed for user core May 17 00:22:32.993846 systemd[1]: Started sshd@2-134.199.214.88:22-139.178.68.195:60156.service - OpenSSH per-connection server daemon (139.178.68.195:60156). May 17 00:22:32.995140 systemd[1]: sshd@1-134.199.214.88:22-139.178.68.195:60144.service: Deactivated successfully. May 17 00:22:32.998122 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:22:33.000363 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. May 17 00:22:33.002945 systemd-logind[1555]: Removed session 2. 
May 17 00:22:33.040582 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 60156 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:33.042227 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:33.049087 systemd-logind[1555]: New session 3 of user core. May 17 00:22:33.057832 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:22:33.117686 sshd[1721]: pam_unix(sshd:session): session closed for user core May 17 00:22:33.130837 systemd[1]: Started sshd@3-134.199.214.88:22-139.178.68.195:60158.service - OpenSSH per-connection server daemon (139.178.68.195:60158). May 17 00:22:33.131323 systemd[1]: sshd@2-134.199.214.88:22-139.178.68.195:60156.service: Deactivated successfully. May 17 00:22:33.138347 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:22:33.140554 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. May 17 00:22:33.142279 systemd-logind[1555]: Removed session 3. May 17 00:22:33.171485 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 60158 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:33.173560 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:33.179129 systemd-logind[1555]: New session 4 of user core. May 17 00:22:33.188853 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:22:33.256249 sshd[1728]: pam_unix(sshd:session): session closed for user core May 17 00:22:33.261584 systemd[1]: sshd@3-134.199.214.88:22-139.178.68.195:60158.service: Deactivated successfully. May 17 00:22:33.266111 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:22:33.268091 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. May 17 00:22:33.272958 systemd[1]: Started sshd@4-134.199.214.88:22-139.178.68.195:60172.service - OpenSSH per-connection server daemon (139.178.68.195:60172). May 17 00:22:33.276183 systemd-logind[1555]: Removed session 4. May 17 00:22:33.323108 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 60172 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:33.325307 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:33.331044 systemd-logind[1555]: New session 5 of user core. May 17 00:22:33.342972 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:22:33.415168 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:22:33.415517 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:22:33.427200 sudo[1743]: pam_unix(sudo:session): session closed for user root May 17 00:22:33.431714 sshd[1739]: pam_unix(sshd:session): session closed for user core May 17 00:22:33.439907 systemd[1]: Started sshd@5-134.199.214.88:22-139.178.68.195:60188.service - OpenSSH per-connection server daemon (139.178.68.195:60188). May 17 00:22:33.440596 systemd[1]: sshd@4-134.199.214.88:22-139.178.68.195:60172.service: Deactivated successfully. May 17 00:22:33.444307 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:22:33.446702 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. May 17 00:22:33.449453 systemd-logind[1555]: Removed session 5. 
May 17 00:22:33.493374 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 60188 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:33.495171 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:33.501099 systemd-logind[1555]: New session 6 of user core. May 17 00:22:33.516917 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:22:33.581072 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:22:33.581469 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:22:33.585927 sudo[1753]: pam_unix(sudo:session): session closed for user root May 17 00:22:33.593347 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:22:33.594038 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:22:33.616906 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:22:33.619512 auditctl[1756]: No rules May 17 00:22:33.621269 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:22:33.622104 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:22:33.634970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:22:33.668277 augenrules[1775]: No rules May 17 00:22:33.669768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:22:33.672322 sudo[1752]: pam_unix(sudo:session): session closed for user root May 17 00:22:33.679511 sshd[1746]: pam_unix(sshd:session): session closed for user core May 17 00:22:33.687007 systemd[1]: Started sshd@6-134.199.214.88:22-139.178.68.195:39764.service - OpenSSH per-connection server daemon (139.178.68.195:39764). May 17 00:22:33.688019 systemd[1]: sshd@5-134.199.214.88:22-139.178.68.195:60188.service: Deactivated successfully. May 17 00:22:33.691665 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:22:33.693616 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. May 17 00:22:33.696328 systemd-logind[1555]: Removed session 6. May 17 00:22:33.735233 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 39764 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:22:33.737253 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:33.743278 systemd-logind[1555]: New session 7 of user core. May 17 00:22:33.748027 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:22:33.809940 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:22:33.810247 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:22:34.281858 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:22:34.283859 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:22:34.736163 dockerd[1803]: time="2025-05-17T00:22:34.735992918Z" level=info msg="Starting up" May 17 00:22:34.861734 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport590126786-merged.mount: Deactivated successfully. 
May 17 00:22:34.964788 dockerd[1803]: time="2025-05-17T00:22:34.964723141Z" level=info msg="Loading containers: start."
May 17 00:22:35.113574 kernel: Initializing XFRM netlink socket
May 17 00:22:35.216325 systemd-networkd[1220]: docker0: Link UP
May 17 00:22:35.237706 dockerd[1803]: time="2025-05-17T00:22:35.237521347Z" level=info msg="Loading containers: done."
May 17 00:22:35.255681 dockerd[1803]: time="2025-05-17T00:22:35.255618185Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:22:35.255888 dockerd[1803]: time="2025-05-17T00:22:35.255749909Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 17 00:22:35.255888 dockerd[1803]: time="2025-05-17T00:22:35.255873037Z" level=info msg="Daemon has completed initialization"
May 17 00:22:35.292336 dockerd[1803]: time="2025-05-17T00:22:35.292234479Z" level=info msg="API listen on /run/docker.sock"
May 17 00:22:35.292785 systemd[1]: Started docker.service - Docker Application Container Engine.
May 17 00:22:36.139901 containerd[1593]: time="2025-05-17T00:22:36.139842137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 17 00:22:36.763282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990184258.mount: Deactivated successfully.
May 17 00:22:37.918536 containerd[1593]: time="2025-05-17T00:22:37.917520468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:37.919804 containerd[1593]: time="2025-05-17T00:22:37.919426143Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845"
May 17 00:22:37.920504 containerd[1593]: time="2025-05-17T00:22:37.920465591Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:37.924694 containerd[1593]: time="2025-05-17T00:22:37.924637759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:37.926780 containerd[1593]: time="2025-05-17T00:22:37.926717028Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.786816195s"
May 17 00:22:37.926780 containerd[1593]: time="2025-05-17T00:22:37.926779424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 17 00:22:37.927560 containerd[1593]: time="2025-05-17T00:22:37.927530287Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 17 00:22:39.243459 containerd[1593]: time="2025-05-17T00:22:39.243221595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:39.245578 containerd[1593]: time="2025-05-17T00:22:39.245488137Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 17 00:22:39.246564 containerd[1593]: time="2025-05-17T00:22:39.246239074Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:39.248977 containerd[1593]: time="2025-05-17T00:22:39.248926994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:39.251030 containerd[1593]: time="2025-05-17T00:22:39.250976712Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.32340294s"
May 17 00:22:39.251277 containerd[1593]: time="2025-05-17T00:22:39.251237591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 17 00:22:39.251931 containerd[1593]: time="2025-05-17T00:22:39.251899689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 17 00:22:40.427788 containerd[1593]: time="2025-05-17T00:22:40.427716609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:40.428810 containerd[1593]: time="2025-05-17T00:22:40.428762359Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 17 00:22:40.429752 containerd[1593]: time="2025-05-17T00:22:40.429694913Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:40.433902 containerd[1593]: time="2025-05-17T00:22:40.433829993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:22:40.435811 containerd[1593]: time="2025-05-17T00:22:40.434857324Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.182922145s"
May 17 00:22:40.435811 containerd[1593]: time="2025-05-17T00:22:40.434903931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 17 00:22:40.435811 containerd[1593]: time="2025-05-17T00:22:40.435472662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 17 00:22:41.538244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132206448.mount: Deactivated successfully.
May 17 00:22:42.129248 containerd[1593]: time="2025-05-17T00:22:42.129177145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:42.130325 containerd[1593]: time="2025-05-17T00:22:42.130273637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:22:42.130982 containerd[1593]: time="2025-05-17T00:22:42.130894607Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:42.133167 containerd[1593]: time="2025-05-17T00:22:42.133099858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:42.134393 containerd[1593]: time="2025-05-17T00:22:42.133923906Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.698411489s" May 17 00:22:42.134393 containerd[1593]: time="2025-05-17T00:22:42.133981207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:22:42.134787 containerd[1593]: time="2025-05-17T00:22:42.134748317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:22:42.456276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:22:42.457183 systemd-resolved[1477]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 17 00:22:42.462751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:42.602646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348399476.mount: Deactivated successfully. May 17 00:22:42.645290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:42.655324 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:42.736306 kubelet[2038]: E0517 00:22:42.735351 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:42.740391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:42.740591 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:22:43.421599 containerd[1593]: time="2025-05-17T00:22:43.421522279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:43.423628 containerd[1593]: time="2025-05-17T00:22:43.423283838Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:22:43.424728 containerd[1593]: time="2025-05-17T00:22:43.424630608Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:43.428282 containerd[1593]: time="2025-05-17T00:22:43.427714027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:43.429063 containerd[1593]: time="2025-05-17T00:22:43.429024523Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.294235254s" May 17 00:22:43.429063 containerd[1593]: time="2025-05-17T00:22:43.429065252Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:22:43.429897 containerd[1593]: time="2025-05-17T00:22:43.429865313Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:22:43.885947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178025100.mount: Deactivated successfully. 
May 17 00:22:43.892433 containerd[1593]: time="2025-05-17T00:22:43.891347687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:43.892433 containerd[1593]: time="2025-05-17T00:22:43.892266325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:22:43.892433 containerd[1593]: time="2025-05-17T00:22:43.892363934Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:43.894686 containerd[1593]: time="2025-05-17T00:22:43.894647425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:43.895537 containerd[1593]: time="2025-05-17T00:22:43.895499853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 465.604437ms" May 17 00:22:43.895537 containerd[1593]: time="2025-05-17T00:22:43.895534562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:22:43.896831 containerd[1593]: time="2025-05-17T00:22:43.896791981Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:22:44.397677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525151373.mount: Deactivated successfully. May 17 00:22:45.508651 systemd-resolved[1477]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
May 17 00:22:46.267689 containerd[1593]: time="2025-05-17T00:22:46.267620285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:46.268961 containerd[1593]: time="2025-05-17T00:22:46.268895947Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:22:46.269877 containerd[1593]: time="2025-05-17T00:22:46.269536919Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:46.274071 containerd[1593]: time="2025-05-17T00:22:46.273979681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:46.276164 containerd[1593]: time="2025-05-17T00:22:46.275899155Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.379061443s" May 17 00:22:46.276164 containerd[1593]: time="2025-05-17T00:22:46.275965003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:22:49.405391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:49.413196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:49.460524 systemd[1]: Reloading requested from client PID 2178 ('systemctl') (unit session-7.scope)... May 17 00:22:49.460756 systemd[1]: Reloading... May 17 00:22:49.616450 zram_generator::config[2218]: No configuration found. May 17 00:22:49.764735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:22:49.848669 systemd[1]: Reloading finished in 385 ms. May 17 00:22:49.908915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:22:49.909012 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:22:49.909456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:49.920390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:50.074732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:50.087166 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:22:50.141959 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:22:50.142392 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:22:50.142454 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:22:50.142673 kubelet[2284]: I0517 00:22:50.142615 2284 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:22:50.562259 kubelet[2284]: I0517 00:22:50.562188 2284 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:22:50.562259 kubelet[2284]: I0517 00:22:50.562241 2284 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:22:50.562733 kubelet[2284]: I0517 00:22:50.562697 2284 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:22:50.586266 kubelet[2284]: E0517 00:22:50.586078 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://134.199.214.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError"
May 17 00:22:50.586266 kubelet[2284]: I0517 00:22:50.586092 2284 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:22:50.597308 kubelet[2284]: E0517 00:22:50.597249 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:22:50.597597 kubelet[2284]: I0517 00:22:50.597467 2284 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:22:50.612682 kubelet[2284]: I0517 00:22:50.610027 2284 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:22:50.612682 kubelet[2284]: I0517 00:22:50.610376 2284 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 00:22:50.612682 kubelet[2284]: I0517 00:22:50.610506 2284 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:22:50.612682 kubelet[2284]: I0517 00:22:50.610533 2284 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-2d1cdc348f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 17 00:22:50.612977 kubelet[2284]: I0517 00:22:50.610883 2284 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:22:50.612977 kubelet[2284]: I0517 00:22:50.610899 2284 container_manager_linux.go:300] "Creating device plugin manager"
May 17 00:22:50.612977 kubelet[2284]: I0517 00:22:50.611061 2284 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:22:50.617407 kubelet[2284]: I0517 00:22:50.617352 2284 kubelet.go:408] "Attempting to sync node with API server"
May 17 00:22:50.617407 kubelet[2284]: I0517 00:22:50.617434 2284 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:22:50.617709 kubelet[2284]: I0517 00:22:50.617478 2284 kubelet.go:314] "Adding apiserver pod source"
May 17 00:22:50.617709 kubelet[2284]: I0517 00:22:50.617504 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:22:50.622805 kubelet[2284]: W0517 00:22:50.621878 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.214.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-2d1cdc348f&limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused
May 17 00:22:50.622805 kubelet[2284]: E0517 00:22:50.621966 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.214.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-2d1cdc348f&limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError"
May 17 00:22:50.623961 kubelet[2284]: I0517 00:22:50.623810 2284 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 00:22:50.627098 kubelet[2284]: I0517 00:22:50.626959 2284 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:22:50.628365 kubelet[2284]: W0517 00:22:50.627920 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:22:50.629915 kubelet[2284]: I0517 00:22:50.629595 2284 server.go:1274] "Started kubelet"
May 17 00:22:50.629915 kubelet[2284]: W0517 00:22:50.629761 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.214.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused
May 17 00:22:50.629915 kubelet[2284]: E0517 00:22:50.629813 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.214.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError"
May 17 00:22:50.631442 kubelet[2284]: I0517 00:22:50.630893 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:22:50.631442 kubelet[2284]: I0517 00:22:50.631137 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:22:50.632087 kubelet[2284]: I0517 00:22:50.632066 2284 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:22:50.635192 kubelet[2284]: I0517 00:22:50.635148 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:22:50.635577 kubelet[2284]: I0517 00:22:50.635558 2284 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:22:50.638481 kubelet[2284]: I0517 00:22:50.638411 2284 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:22:50.639056 kubelet[2284]: E0517 00:22:50.639027 2284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-2d1cdc348f\" not found"
May 17 00:22:50.641888 kubelet[2284]: I0517 00:22:50.641858 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:22:50.642925 kubelet[2284]: I0517 00:22:50.642901 2284 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:22:50.644363 kubelet[2284]: I0517 00:22:50.643022 2284 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:22:50.652473 kubelet[2284]: I0517 00:22:50.652335 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:22:50.653907 kubelet[2284]: I0517 00:22:50.653806 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:22:50.653907 kubelet[2284]: I0517 00:22:50.653848 2284 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 00:22:50.653907 kubelet[2284]: I0517 00:22:50.653874 2284 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 00:22:50.654065 kubelet[2284]: E0517 00:22:50.653936 2284 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:22:50.656886 kubelet[2284]: E0517 00:22:50.655450 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.214.88:6443/api/v1/namespaces/default/events\": dial tcp 134.199.214.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-2d1cdc348f.184028ab1231f567 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-2d1cdc348f,UID:ci-4081.3.3-n-2d1cdc348f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-2d1cdc348f,},FirstTimestamp:2025-05-17 00:22:50.629535079 +0000 UTC m=+0.537476286,LastTimestamp:2025-05-17 00:22:50.629535079 +0000 UTC m=+0.537476286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-2d1cdc348f,}"
May 17 00:22:50.656886 kubelet[2284]: E0517 00:22:50.656799 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.214.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-2d1cdc348f?timeout=10s\": dial tcp 134.199.214.88:6443: connect: connection refused" interval="200ms"
May 17 00:22:50.657240 kubelet[2284]: I0517 00:22:50.657096 2284 factory.go:221] Registration of the systemd container factory successfully
May 17 00:22:50.657240 kubelet[2284]: I0517 00:22:50.657218 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:22:50.659446 kubelet[2284]: W0517 00:22:50.658950 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.214.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused
May 17 00:22:50.659446 kubelet[2284]: E0517 00:22:50.659002 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.214.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError"
May 17 00:22:50.663947 kubelet[2284]: W0517 00:22:50.663890 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.214.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused
May 17 00:22:50.664138 kubelet[2284]: E0517 00:22:50.664118 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.214.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp
134.199.214.88:6443: connect: connection refused" logger="UnhandledError" May 17 00:22:50.664549 kubelet[2284]: I0517 00:22:50.664528 2284 factory.go:221] Registration of the containerd container factory successfully May 17 00:22:50.685084 kubelet[2284]: E0517 00:22:50.685027 2284 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:22:50.690002 kubelet[2284]: I0517 00:22:50.689806 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:22:50.690002 kubelet[2284]: I0517 00:22:50.689985 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:22:50.690002 kubelet[2284]: I0517 00:22:50.690007 2284 state_mem.go:36] "Initialized new in-memory state store" May 17 00:22:50.692203 kubelet[2284]: I0517 00:22:50.692168 2284 policy_none.go:49] "None policy: Start" May 17 00:22:50.693553 kubelet[2284]: I0517 00:22:50.693526 2284 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:22:50.694148 kubelet[2284]: I0517 00:22:50.693820 2284 state_mem.go:35] "Initializing new in-memory state store" May 17 00:22:50.699293 kubelet[2284]: I0517 00:22:50.699253 2284 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:22:50.699796 kubelet[2284]: I0517 00:22:50.699779 2284 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:22:50.700049 kubelet[2284]: I0517 00:22:50.699958 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:22:50.702307 kubelet[2284]: I0517 00:22:50.702149 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:22:50.703932 kubelet[2284]: E0517 00:22:50.703479 2284 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-2d1cdc348f\" not found" May 17 00:22:50.801314 kubelet[2284]: I0517 00:22:50.801252 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.801674 kubelet[2284]: E0517 00:22:50.801650 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.214.88:6443/api/v1/nodes\": dial tcp 134.199.214.88:6443: connect: connection refused" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.857828 kubelet[2284]: E0517 00:22:50.857654 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.214.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-2d1cdc348f?timeout=10s\": dial tcp 134.199.214.88:6443: connect: connection refused" interval="400ms" May 17 00:22:50.944515 kubelet[2284]: I0517 00:22:50.944076 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d81d8181aaea151db0227471cc713e0a-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-2d1cdc348f\" (UID: \"d81d8181aaea151db0227471cc713e0a\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944515 kubelet[2284]: I0517 00:22:50.944150 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d81d8181aaea151db0227471cc713e0a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-2d1cdc348f\" (UID: \"d81d8181aaea151db0227471cc713e0a\") " 
pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944515 kubelet[2284]: I0517 00:22:50.944181 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944515 kubelet[2284]: I0517 00:22:50.944205 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f959af2469a870f17ba1488506b4bfa0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-2d1cdc348f\" (UID: \"f959af2469a870f17ba1488506b4bfa0\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944515 kubelet[2284]: I0517 00:22:50.944233 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d81d8181aaea151db0227471cc713e0a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-2d1cdc348f\" (UID: \"d81d8181aaea151db0227471cc713e0a\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944897 kubelet[2284]: I0517 00:22:50.944254 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944897 kubelet[2284]: I0517 00:22:50.944275 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944897 kubelet[2284]: I0517 00:22:50.944297 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:50.944897 kubelet[2284]: I0517 00:22:50.944322 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:51.003624 kubelet[2284]: I0517 00:22:51.003570 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:51.004157 kubelet[2284]: E0517 00:22:51.004121 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.214.88:6443/api/v1/nodes\": dial tcp 134.199.214.88:6443: connect: connection refused" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:51.061303 kubelet[2284]: E0517 
00:22:51.061246 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:51.064323 containerd[1593]: time="2025-05-17T00:22:51.064059882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-2d1cdc348f,Uid:d81d8181aaea151db0227471cc713e0a,Namespace:kube-system,Attempt:0,}" May 17 00:22:51.065946 kubelet[2284]: E0517 00:22:51.065911 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:51.067027 systemd-resolved[1477]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 17 00:22:51.067451 kubelet[2284]: E0517 00:22:51.067392 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:51.071576 containerd[1593]: time="2025-05-17T00:22:51.071211563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-2d1cdc348f,Uid:1bcea2d2e94fc4278bb880def082d2b3,Namespace:kube-system,Attempt:0,}" May 17 00:22:51.071576 containerd[1593]: time="2025-05-17T00:22:51.071331020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-2d1cdc348f,Uid:f959af2469a870f17ba1488506b4bfa0,Namespace:kube-system,Attempt:0,}" May 17 00:22:51.258628 kubelet[2284]: E0517 00:22:51.258521 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.214.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-2d1cdc348f?timeout=10s\": dial tcp 134.199.214.88:6443: connect: connection refused" interval="800ms" May 17 00:22:51.330715 kubelet[2284]: E0517 00:22:51.330402 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.214.88:6443/api/v1/namespaces/default/events\": dial tcp 134.199.214.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-2d1cdc348f.184028ab1231f567 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-2d1cdc348f,UID:ci-4081.3.3-n-2d1cdc348f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-2d1cdc348f,},FirstTimestamp:2025-05-17 00:22:50.629535079 +0000 UTC m=+0.537476286,LastTimestamp:2025-05-17 00:22:50.629535079 +0000 UTC m=+0.537476286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-2d1cdc348f,}" May 17 00:22:51.405472 kubelet[2284]: I0517 00:22:51.405287 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:51.406098 kubelet[2284]: E0517 00:22:51.406036 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.214.88:6443/api/v1/nodes\": dial tcp 134.199.214.88:6443: connect: connection refused" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:51.556109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770950686.mount: Deactivated successfully. 
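[Annotation] The RunPodSandbox entries above show the kubelet launching the control plane itself from the static pod path /etc/kubernetes/manifests (added earlier in this log) while the API server at 134.199.214.88:6443 is still unreachable — which is why every informer list and the node registration fail with "connection refused" until the apiserver container comes up. A minimal sketch of what such a static pod manifest looks like; the image tag and host paths are assumed kubeadm-style defaults, not values printed in this log, but the volume names match the VerifyControllerAttachedVolume entries above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.31.8   # assumed to match the logged kubelet version
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki                # mount paths assumed from kubeadm defaults
          readOnly: true
        - name: ca-certs
          mountPath: /etc/ssl/certs
          readOnly: true
        - name: usr-share-ca-certificates
          mountPath: /usr/share/ca-certificates
          readOnly: true
      volumes:                                          # names match the reconciler_common entries above
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
      - name: usr-share-ca-certificates
        hostPath:
          path: /usr/share/ca-certificates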
May 17 00:22:51.561429 containerd[1593]: time="2025-05-17T00:22:51.561357010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:22:51.562953 containerd[1593]: time="2025-05-17T00:22:51.562871645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:22:51.563854 containerd[1593]: time="2025-05-17T00:22:51.563814186Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:22:51.566023 containerd[1593]: time="2025-05-17T00:22:51.565811850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:22:51.566023 containerd[1593]: time="2025-05-17T00:22:51.565914773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:22:51.567514 containerd[1593]: time="2025-05-17T00:22:51.567471561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:22:51.567760 containerd[1593]: time="2025-05-17T00:22:51.567718144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:22:51.571446 containerd[1593]: time="2025-05-17T00:22:51.569550940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.150226ms" May 17 00:22:51.573369 containerd[1593]: time="2025-05-17T00:22:51.573316588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.007472ms" May 17 00:22:51.575026 containerd[1593]: time="2025-05-17T00:22:51.574983650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:22:51.576175 containerd[1593]: time="2025-05-17T00:22:51.576102471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.942242ms" May 17 00:22:51.597049 kubelet[2284]: W0517 00:22:51.596990 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.214.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-2d1cdc348f&limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused 
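[Annotation] The repeated "Failed to ensure lease exists, will retry" errors around these entries are the kubelet trying to create its node heartbeat Lease in the kube-node-lease namespace; note the retry interval doubling through the log (200ms, 400ms, 800ms, then 1.6s) as the apiserver stays unreachable. A sketch of the object it is trying to create, under the assumption of the stock kubelet defaults (the 40-second lease duration and the renew timestamp are illustrative, not logged):

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: ci-4081.3.3-n-2d1cdc348f        # node name taken from this log
      namespace: kube-node-lease
    spec:
      holderIdentity: ci-4081.3.3-n-2d1cdc348f
      leaseDurationSeconds: 40              # kubelet default; assumed, not visible in the log
      renewTime: "2025-05-17T00:22:56.000000Z"   # illustrative timestamp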
May 17 00:22:51.597211 kubelet[2284]: E0517 00:22:51.597194 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.214.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-2d1cdc348f&limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError" May 17 00:22:51.735233 containerd[1593]: time="2025-05-17T00:22:51.735042534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:51.736577 containerd[1593]: time="2025-05-17T00:22:51.736448401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:51.736724 containerd[1593]: time="2025-05-17T00:22:51.736530940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:51.736724 containerd[1593]: time="2025-05-17T00:22:51.736578457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:51.737309 containerd[1593]: time="2025-05-17T00:22:51.737230212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:51.737566 containerd[1593]: time="2025-05-17T00:22:51.737493921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:51.738399 containerd[1593]: time="2025-05-17T00:22:51.738291375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:51.738629 containerd[1593]: time="2025-05-17T00:22:51.738551595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:51.739538 containerd[1593]: time="2025-05-17T00:22:51.738994308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:51.739538 containerd[1593]: time="2025-05-17T00:22:51.739072832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:51.739538 containerd[1593]: time="2025-05-17T00:22:51.739095748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:51.741286 containerd[1593]: time="2025-05-17T00:22:51.741195446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:51.793670 kubelet[2284]: W0517 00:22:51.792664 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.214.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused May 17 00:22:51.795247 kubelet[2284]: E0517 00:22:51.794981 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.214.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError" May 17 00:22:51.810552 kubelet[2284]: W0517 00:22:51.809601 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.214.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused May 17 00:22:51.812607 kubelet[2284]: E0517 00:22:51.812554 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.214.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError" May 17 00:22:51.864787 containerd[1593]: time="2025-05-17T00:22:51.864473745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-2d1cdc348f,Uid:f959af2469a870f17ba1488506b4bfa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7db9c8e1b349ca00aa9e781b262403336f541d48e7a91e5542a849f7b30f14\"" May 17 00:22:51.867433 containerd[1593]: time="2025-05-17T00:22:51.865851319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-2d1cdc348f,Uid:1bcea2d2e94fc4278bb880def082d2b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"df0eb8735890a90f6c5a53289bee64e0861577a52faf71a8b56bd99752e02ef6\"" May 17 00:22:51.868664 kubelet[2284]: E0517 00:22:51.868329 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:51.871284 kubelet[2284]: E0517 00:22:51.871257 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:51.877604 containerd[1593]: time="2025-05-17T00:22:51.877428323Z" level=info msg="CreateContainer within sandbox \"df0eb8735890a90f6c5a53289bee64e0861577a52faf71a8b56bd99752e02ef6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:22:51.879037 containerd[1593]: time="2025-05-17T00:22:51.879008358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-2d1cdc348f,Uid:d81d8181aaea151db0227471cc713e0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd4688bf9e5953257d36e55f4f3c120c1051af8d6bed1395aa35a3efa93e16f5\"" May 17 00:22:51.880103 containerd[1593]: time="2025-05-17T00:22:51.880030183Z" level=info msg="CreateContainer within sandbox \"1a7db9c8e1b349ca00aa9e781b262403336f541d48e7a91e5542a849f7b30f14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:22:51.880837 
kubelet[2284]: E0517 00:22:51.880638 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:51.883858 containerd[1593]: time="2025-05-17T00:22:51.883812545Z" level=info msg="CreateContainer within sandbox \"cd4688bf9e5953257d36e55f4f3c120c1051af8d6bed1395aa35a3efa93e16f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:22:51.895857 containerd[1593]: time="2025-05-17T00:22:51.895684854Z" level=info msg="CreateContainer within sandbox \"1a7db9c8e1b349ca00aa9e781b262403336f541d48e7a91e5542a849f7b30f14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"95fed5d52ee7a401ea23e8690f2ccf269740344642ef840e10506d0dca89a4da\"" May 17 00:22:51.896768 containerd[1593]: time="2025-05-17T00:22:51.896722962Z" level=info msg="StartContainer for \"95fed5d52ee7a401ea23e8690f2ccf269740344642ef840e10506d0dca89a4da\"" May 17 00:22:51.900633 containerd[1593]: time="2025-05-17T00:22:51.900590795Z" level=info msg="CreateContainer within sandbox \"df0eb8735890a90f6c5a53289bee64e0861577a52faf71a8b56bd99752e02ef6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc8959c27ee02c08493b73c7a68178bf5a052f164b9e8f078c6e5e50a2fa2dd0\"" May 17 00:22:51.901494 containerd[1593]: time="2025-05-17T00:22:51.901399583Z" level=info msg="CreateContainer within sandbox \"cd4688bf9e5953257d36e55f4f3c120c1051af8d6bed1395aa35a3efa93e16f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ab794793a20d7303b7e72b84bcd62a7ef9c3db5cde6ed6ad555dd3aeeadcca6\"" May 17 00:22:51.901964 containerd[1593]: time="2025-05-17T00:22:51.901668504Z" level=info msg="StartContainer for \"dc8959c27ee02c08493b73c7a68178bf5a052f164b9e8f078c6e5e50a2fa2dd0\"" May 17 00:22:51.902371 containerd[1593]: time="2025-05-17T00:22:51.902218408Z" level=info msg="StartContainer for \"3ab794793a20d7303b7e72b84bcd62a7ef9c3db5cde6ed6ad555dd3aeeadcca6\"" May 17 00:22:51.953814 kubelet[2284]: W0517 00:22:51.952533 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.214.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.214.88:6443: connect: connection refused May 17 00:22:51.953814 kubelet[2284]: E0517 00:22:51.952730 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.214.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.214.88:6443: connect: connection refused" logger="UnhandledError" May 17 00:22:52.031787 containerd[1593]: time="2025-05-17T00:22:52.031634682Z" level=info msg="StartContainer for \"3ab794793a20d7303b7e72b84bcd62a7ef9c3db5cde6ed6ad555dd3aeeadcca6\" returns successfully" May 17 00:22:52.041486 containerd[1593]: time="2025-05-17T00:22:52.041101401Z" level=info msg="StartContainer for \"dc8959c27ee02c08493b73c7a68178bf5a052f164b9e8f078c6e5e50a2fa2dd0\" returns successfully" May 17 00:22:52.064265 kubelet[2284]: E0517 00:22:52.062249 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.214.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-2d1cdc348f?timeout=10s\": dial tcp 134.199.214.88:6443: connect: connection refused" 
interval="1.6s" May 17 00:22:52.074600 containerd[1593]: time="2025-05-17T00:22:52.074525708Z" level=info msg="StartContainer for \"95fed5d52ee7a401ea23e8690f2ccf269740344642ef840e10506d0dca89a4da\" returns successfully" May 17 00:22:52.208492 kubelet[2284]: I0517 00:22:52.208041 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:52.208492 kubelet[2284]: E0517 00:22:52.208451 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.214.88:6443/api/v1/nodes\": dial tcp 134.199.214.88:6443: connect: connection refused" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:52.696067 kubelet[2284]: E0517 00:22:52.695939 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:52.702646 kubelet[2284]: E0517 00:22:52.702104 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:52.713889 kubelet[2284]: E0517 00:22:52.713838 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:53.718792 kubelet[2284]: E0517 00:22:53.718738 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:22:53.812505 kubelet[2284]: I0517 00:22:53.811951 2284 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:54.010303 kubelet[2284]: E0517 00:22:54.010252 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-2d1cdc348f\" not found" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:54.070452 kubelet[2284]: I0517 00:22:54.068839 2284 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:54.070452 kubelet[2284]: E0517 00:22:54.068894 2284 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-2d1cdc348f\": node \"ci-4081.3.3-n-2d1cdc348f\" not found" May 17 00:22:54.633087 kubelet[2284]: I0517 00:22:54.632642 2284 apiserver.go:52] "Watching apiserver" May 17 00:22:54.643922 kubelet[2284]: I0517 00:22:54.643870 2284 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:22:56.032131 systemd[1]: Reloading requested from client PID 2556 ('systemctl') (unit session-7.scope)... May 17 00:22:56.032157 systemd[1]: Reloading... May 17 00:22:56.120449 zram_generator::config[2593]: No configuration found. May 17 00:22:56.278105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:22:56.368564 systemd[1]: Reloading finished in 335 ms. May 17 00:22:56.403504 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:56.411026 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:22:56.411468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:22:56.421862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:22:56.567755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:56.587211 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:22:56.669594 kubelet[2656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:22:56.669594 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:22:56.669594 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:22:56.670247 kubelet[2656]: I0517 00:22:56.670027 2656 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:22:56.679830 kubelet[2656]: I0517 00:22:56.679595 2656 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:22:56.679830 kubelet[2656]: I0517 00:22:56.679635 2656 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:22:56.680436 kubelet[2656]: I0517 00:22:56.680380 2656 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:22:56.682823 kubelet[2656]: I0517 00:22:56.682784 2656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:22:56.685486 kubelet[2656]: I0517 00:22:56.685102 2656 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:22:56.690168 kubelet[2656]: E0517 00:22:56.690121 2656 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:22:56.690565 kubelet[2656]: I0517 00:22:56.690543 2656 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:22:56.694491 kubelet[2656]: I0517 00:22:56.694133 2656 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:22:56.694776 kubelet[2656]: I0517 00:22:56.694645 2656 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:22:56.694854 kubelet[2656]: I0517 00:22:56.694783 2656 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:22:56.695070 kubelet[2656]: I0517 00:22:56.694856 2656 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-2d1cdc348f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:22:56.695169 kubelet[2656]: I0517 00:22:56.695085 2656 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:22:56.695169 kubelet[2656]: I0517 00:22:56.695101 2656 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:22:56.695169 kubelet[2656]: I0517 00:22:56.695144 2656 state_mem.go:36] "Initialized new in-memory state store" May 17 00:22:56.695330 kubelet[2656]: I0517 00:22:56.695318 2656 kubelet.go:408] "Attempting to sync node with API server" May 17 00:22:56.695379 kubelet[2656]: I0517 00:22:56.695334 2656 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:22:56.695379 kubelet[2656]: I0517 00:22:56.695364 2656 kubelet.go:314] "Adding apiserver pod source" May 17 00:22:56.695379 kubelet[2656]: I0517 00:22:56.695376 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:22:56.697176 kubelet[2656]: I0517 00:22:56.697148 2656 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:22:56.697797 kubelet[2656]: I0517 00:22:56.697691 2656 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:22:56.698165 kubelet[2656]: I0517 00:22:56.698134 2656 server.go:1274] "Started kubelet" May 17 00:22:56.702769 kubelet[2656]: I0517 00:22:56.700719 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:22:56.709672 
kubelet[2656]: I0517 00:22:56.709577 2656 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:22:56.723162 kubelet[2656]: I0517 00:22:56.722350 2656 server.go:449] "Adding debug handlers to kubelet server" May 17 00:22:56.727864 kubelet[2656]: I0517 00:22:56.727798 2656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:22:56.728117 kubelet[2656]: I0517 00:22:56.728091 2656 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:22:56.728456 kubelet[2656]: I0517 00:22:56.728409 2656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:22:56.730365 kubelet[2656]: I0517 00:22:56.730325 2656 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:22:56.730775 kubelet[2656]: E0517 00:22:56.730738 2656 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-2d1cdc348f\" not found" May 17 00:22:56.732387 kubelet[2656]: I0517 00:22:56.732352 2656 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:22:56.733904 kubelet[2656]: I0517 00:22:56.733162 2656 reconciler.go:26] "Reconciler: start to sync state" May 17 00:22:56.735597 kubelet[2656]: I0517 00:22:56.735557 2656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:22:56.737256 kubelet[2656]: I0517 00:22:56.737227 2656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:22:56.737403 kubelet[2656]: I0517 00:22:56.737393 2656 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:22:56.737494 kubelet[2656]: I0517 00:22:56.737486 2656 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:22:56.737632 kubelet[2656]: E0517 00:22:56.737612 2656 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:22:56.749149 kubelet[2656]: I0517 00:22:56.749094 2656 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:22:56.757377 kubelet[2656]: I0517 00:22:56.756775 2656 factory.go:221] Registration of the containerd container factory successfully May 17 00:22:56.757377 kubelet[2656]: I0517 00:22:56.756798 2656 factory.go:221] Registration of the systemd container factory successfully May 17 00:22:56.773477 kubelet[2656]: E0517 00:22:56.773263 2656 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:22:56.838558 kubelet[2656]: E0517 00:22:56.838512 2656 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:22:56.846193 kubelet[2656]: I0517 00:22:56.846143 2656 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:22:56.846193 kubelet[2656]: I0517 00:22:56.846167 2656 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:22:56.846193 kubelet[2656]: I0517 00:22:56.846200 2656 state_mem.go:36] "Initialized new in-memory state store" May 17 00:22:56.846477 kubelet[2656]: I0517 00:22:56.846383 2656 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:22:56.846477 kubelet[2656]: I0517 00:22:56.846392 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:22:56.846477 kubelet[2656]: I0517 00:22:56.846466 2656 policy_none.go:49] "None policy: Start" May 17 00:22:56.848069 kubelet[2656]: I0517 00:22:56.848037 2656 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:22:56.848069 kubelet[2656]: I0517 00:22:56.848077 2656 state_mem.go:35] "Initializing new in-memory state store" May 17 00:22:56.848267 kubelet[2656]: I0517 00:22:56.848254 2656 state_mem.go:75] "Updated machine memory state" May 17 00:22:56.850002 kubelet[2656]: I0517 00:22:56.849809 2656 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:22:56.850124 kubelet[2656]: I0517 00:22:56.850036 2656 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:22:56.850124 kubelet[2656]: I0517 00:22:56.850052 2656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:22:56.851393 kubelet[2656]: I0517 00:22:56.851223 2656 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:22:56.960695 kubelet[2656]: I0517 00:22:56.958533 2656 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:56.969958 kubelet[2656]: I0517 00:22:56.969794 2656 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:56.970467 kubelet[2656]: I0517 00:22:56.970241 2656 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.051062 kubelet[2656]: W0517 00:22:57.050491 2656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:22:57.051062 kubelet[2656]: W0517 00:22:57.050796 2656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:22:57.051062 kubelet[2656]: W0517 00:22:57.050864 2656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:22:57.136468 kubelet[2656]: I0517 00:22:57.136122 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 
00:22:57.136468 kubelet[2656]: I0517 00:22:57.136181 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136468 kubelet[2656]: I0517 00:22:57.136207 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f959af2469a870f17ba1488506b4bfa0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-2d1cdc348f\" (UID: \"f959af2469a870f17ba1488506b4bfa0\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136468 kubelet[2656]: I0517 00:22:57.136234 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d81d8181aaea151db0227471cc713e0a-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-2d1cdc348f\" (UID: \"d81d8181aaea151db0227471cc713e0a\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136468 kubelet[2656]: I0517 00:22:57.136250 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d81d8181aaea151db0227471cc713e0a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-2d1cdc348f\" (UID: \"d81d8181aaea151db0227471cc713e0a\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136852 kubelet[2656]: I0517 00:22:57.136264 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136852 kubelet[2656]: I0517 00:22:57.136283 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136852 kubelet[2656]: I0517 00:22:57.136303 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d81d8181aaea151db0227471cc713e0a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-2d1cdc348f\" (UID: \"d81d8181aaea151db0227471cc713e0a\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.136852 kubelet[2656]: I0517 00:22:57.136323 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bcea2d2e94fc4278bb880def082d2b3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-2d1cdc348f\" (UID: \"1bcea2d2e94fc4278bb880def082d2b3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" May 17 00:22:57.352312 kubelet[2656]: E0517 00:22:57.352121 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:57.353990 kubelet[2656]: E0517 00:22:57.353676 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:57.353990 kubelet[2656]: E0517 00:22:57.353898 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:57.696666 kubelet[2656]: I0517 00:22:57.696525 2656 apiserver.go:52] "Watching apiserver"
May 17 00:22:57.732607 kubelet[2656]: I0517 00:22:57.732540 2656 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 00:22:57.799234 kubelet[2656]: E0517 00:22:57.799175 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:57.799857 kubelet[2656]: E0517 00:22:57.799658 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:57.810047 kubelet[2656]: W0517 00:22:57.809916 2656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:22:57.811645 kubelet[2656]: E0517 00:22:57.810023 2656 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-n-2d1cdc348f\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-2d1cdc348f"
May 17 00:22:57.814546 kubelet[2656]: E0517 00:22:57.813490 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:57.850400 kubelet[2656]: I0517 00:22:57.850201 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-2d1cdc348f" podStartSLOduration=0.850164401 podStartE2EDuration="850.164401ms" podCreationTimestamp="2025-05-17 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:57.838753785 +0000 UTC m=+1.242590846" watchObservedRunningTime="2025-05-17 00:22:57.850164401 +0000 UTC m=+1.254001463"
May 17 00:22:57.851614 kubelet[2656]: I0517 00:22:57.850990 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-2d1cdc348f" podStartSLOduration=0.850967868 podStartE2EDuration="850.967868ms" podCreationTimestamp="2025-05-17 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:57.849892688 +0000 UTC m=+1.253729746" watchObservedRunningTime="2025-05-17 00:22:57.850967868 +0000 UTC m=+1.254804928"
May 17 00:22:57.883401 kubelet[2656]: I0517 00:22:57.883203 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-2d1cdc348f" podStartSLOduration=0.883159873 podStartE2EDuration="883.159873ms" podCreationTimestamp="2025-05-17 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:57.867018944 +0000 UTC m=+1.270856015" watchObservedRunningTime="2025-05-17 00:22:57.883159873 +0000 UTC m=+1.286996928"
May 17 00:22:58.799473 kubelet[2656]: E0517 00:22:58.799411 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:58.800876 kubelet[2656]: E0517 00:22:58.800325 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:59.424905 kubelet[2656]: E0517 00:22:59.423642 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:22:59.801449 kubelet[2656]: E0517 00:22:59.801322 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:01.943540 kubelet[2656]: I0517 00:23:01.943447 2656 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:23:01.944666 containerd[1593]: time="2025-05-17T00:23:01.944603758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:23:01.946085 kubelet[2656]: I0517 00:23:01.944976 2656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:23:02.976681 kubelet[2656]: I0517 00:23:02.976607 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f267352d-8e81-4980-a43d-6b003abfe54c-lib-modules\") pod \"kube-proxy-b6kcr\" (UID: \"f267352d-8e81-4980-a43d-6b003abfe54c\") " pod="kube-system/kube-proxy-b6kcr"
May 17 00:23:02.976681 kubelet[2656]: I0517 00:23:02.976670 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f267352d-8e81-4980-a43d-6b003abfe54c-kube-proxy\") pod \"kube-proxy-b6kcr\" (UID: \"f267352d-8e81-4980-a43d-6b003abfe54c\") " pod="kube-system/kube-proxy-b6kcr"
May 17 00:23:02.976681 kubelet[2656]: I0517 00:23:02.976696 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f267352d-8e81-4980-a43d-6b003abfe54c-xtables-lock\") pod \"kube-proxy-b6kcr\" (UID: \"f267352d-8e81-4980-a43d-6b003abfe54c\") " pod="kube-system/kube-proxy-b6kcr"
May 17 00:23:02.977219 kubelet[2656]: I0517 00:23:02.976721 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfqj\" (UniqueName: \"kubernetes.io/projected/f267352d-8e81-4980-a43d-6b003abfe54c-kube-api-access-8gfqj\") pod \"kube-proxy-b6kcr\" (UID: \"f267352d-8e81-4980-a43d-6b003abfe54c\") " pod="kube-system/kube-proxy-b6kcr"
May 17 00:23:03.177339 kubelet[2656]: I0517 00:23:03.177261 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6fe84457-4ef8-4891-adf6-cfac1e723c00-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-jlr6b\" (UID: \"6fe84457-4ef8-4891-adf6-cfac1e723c00\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jlr6b"
May 17 00:23:03.177339 kubelet[2656]: I0517 00:23:03.177321 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gz7\" (UniqueName: \"kubernetes.io/projected/6fe84457-4ef8-4891-adf6-cfac1e723c00-kube-api-access-b2gz7\") pod \"tigera-operator-7c5755cdcb-jlr6b\" (UID: \"6fe84457-4ef8-4891-adf6-cfac1e723c00\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jlr6b"
May 17 00:23:03.254940 kubelet[2656]: E0517 00:23:03.253205 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:03.255087 containerd[1593]: time="2025-05-17T00:23:03.254345127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6kcr,Uid:f267352d-8e81-4980-a43d-6b003abfe54c,Namespace:kube-system,Attempt:0,}"
May 17 00:23:03.297103 containerd[1593]: time="2025-05-17T00:23:03.295909773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:03.297103 containerd[1593]: time="2025-05-17T00:23:03.295986556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:03.297103 containerd[1593]: time="2025-05-17T00:23:03.296003867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:03.297103 containerd[1593]: time="2025-05-17T00:23:03.296150247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:03.350324 containerd[1593]: time="2025-05-17T00:23:03.350276096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6kcr,Uid:f267352d-8e81-4980-a43d-6b003abfe54c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ef7278bfb815eb241e2ddabfb29cfdc75479a566f35269e7acfc9e39445849\""
May 17 00:23:03.351606 kubelet[2656]: E0517 00:23:03.351565 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:03.355313 containerd[1593]: time="2025-05-17T00:23:03.355254295Z" level=info msg="CreateContainer within sandbox \"d6ef7278bfb815eb241e2ddabfb29cfdc75479a566f35269e7acfc9e39445849\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:23:03.368345 containerd[1593]: time="2025-05-17T00:23:03.367680199Z" level=info msg="CreateContainer within sandbox \"d6ef7278bfb815eb241e2ddabfb29cfdc75479a566f35269e7acfc9e39445849\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3bd9dac6b46b47e6bb39a5254584061c225f9b70698ed176e741a0b3e079a70\""
May 17 00:23:03.368720 containerd[1593]: time="2025-05-17T00:23:03.368695841Z" level=info msg="StartContainer for \"a3bd9dac6b46b47e6bb39a5254584061c225f9b70698ed176e741a0b3e079a70\""
May 17 00:23:03.377011 containerd[1593]: time="2025-05-17T00:23:03.376955785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jlr6b,Uid:6fe84457-4ef8-4891-adf6-cfac1e723c00,Namespace:tigera-operator,Attempt:0,}"
May 17 00:23:03.412089 containerd[1593]: time="2025-05-17T00:23:03.411788695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:03.412089 containerd[1593]: time="2025-05-17T00:23:03.411937980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:03.412089 containerd[1593]: time="2025-05-17T00:23:03.411954320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:03.413138 containerd[1593]: time="2025-05-17T00:23:03.413059794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:03.462491 containerd[1593]: time="2025-05-17T00:23:03.462392261Z" level=info msg="StartContainer for \"a3bd9dac6b46b47e6bb39a5254584061c225f9b70698ed176e741a0b3e079a70\" returns successfully"
May 17 00:23:03.493085 containerd[1593]: time="2025-05-17T00:23:03.492294130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jlr6b,Uid:6fe84457-4ef8-4891-adf6-cfac1e723c00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7156123b9f7c97a1ae99ba486de2b0ec64d85542992bdaba26045e8f9549b5a3\""
May 17 00:23:03.498238 containerd[1593]: time="2025-05-17T00:23:03.497884088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:23:03.821470 kubelet[2656]: E0517 00:23:03.819174 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:05.219865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715299426.mount: Deactivated successfully.
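
The recurring dns.go:153 errors in this stretch of the log come from kubelet capping a pod's resolv.conf at three nameservers. The applied line it reports carries a duplicate (67.207.67.3 appears twice), which suggests the droplet's own resolver configuration lists more than three entries with at least one repeat. A minimal Go sketch of the truncation, assuming a hypothetical parseResolvConf helper rather than kubelet's real implementation:

// resolv_limit.go: a sketch of the nameserver cap behind the repeated
// dns.go:153 errors above. kubelet applies at most three nameservers to a
// pod's resolv.conf; the helper and warning text are illustrative stand-ins.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the limit kubelet enforces

// parseResolvConf collects the addresses from "nameserver" lines.
func parseResolvConf(contents string) []string {
	var nameservers []string
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	return nameservers
}

func main() {
	// Hypothetical host resolv.conf that would trigger the warning; note the
	// duplicate entry, mirroring the applied line in the log.
	resolvConf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
	ns := parseResolvConf(resolvConf)
	if len(ns) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(ns[:maxNameservers], " "))
	}
}

Deduplicating the host's nameserver list, or pointing kubelet at a trimmed file via its --resolv-conf flag, is the usual way to quiet this warning.
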
May 17 00:23:07.467099 kubelet[2656]: E0517 00:23:07.466003 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:07.483159 kubelet[2656]: I0517 00:23:07.482095 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b6kcr" podStartSLOduration=5.482075503 podStartE2EDuration="5.482075503s" podCreationTimestamp="2025-05-17 00:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:03.83492659 +0000 UTC m=+7.238763652" watchObservedRunningTime="2025-05-17 00:23:07.482075503 +0000 UTC m=+10.885912584"
May 17 00:23:07.544282 containerd[1593]: time="2025-05-17T00:23:07.543076353Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:07.544282 containerd[1593]: time="2025-05-17T00:23:07.544099432Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 17 00:23:07.544282 containerd[1593]: time="2025-05-17T00:23:07.544202273Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:07.546363 containerd[1593]: time="2025-05-17T00:23:07.546317207Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:07.547605 containerd[1593]: time="2025-05-17T00:23:07.547566518Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 4.049630578s"
May 17 00:23:07.547605 containerd[1593]: time="2025-05-17T00:23:07.547603232Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 17 00:23:07.552269 containerd[1593]: time="2025-05-17T00:23:07.552087001Z" level=info msg="CreateContainer within sandbox \"7156123b9f7c97a1ae99ba486de2b0ec64d85542992bdaba26045e8f9549b5a3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:23:07.563139 containerd[1593]: time="2025-05-17T00:23:07.563099260Z" level=info msg="CreateContainer within sandbox \"7156123b9f7c97a1ae99ba486de2b0ec64d85542992bdaba26045e8f9549b5a3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d2e87f2b44ac71543cb821f7378007313ad1694f96f048893f0a9717ad8afaf3\""
May 17 00:23:07.565709 containerd[1593]: time="2025-05-17T00:23:07.565665372Z" level=info msg="StartContainer for \"d2e87f2b44ac71543cb821f7378007313ad1694f96f048893f0a9717ad8afaf3\""
May 17 00:23:07.643674 containerd[1593]: time="2025-05-17T00:23:07.643604714Z" level=info msg="StartContainer for \"d2e87f2b44ac71543cb821f7378007313ad1694f96f048893f0a9717ad8afaf3\" returns successfully"
May 17 00:23:07.840843 kubelet[2656]: I0517 00:23:07.840594 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-jlr6b" podStartSLOduration=0.78814431 podStartE2EDuration="4.840566707s" podCreationTimestamp="2025-05-17 00:23:03 +0000 UTC" firstStartedPulling="2025-05-17 00:23:03.496919189 +0000 UTC m=+6.900756227" lastFinishedPulling="2025-05-17 00:23:07.549341586 +0000 UTC m=+10.953178624" observedRunningTime="2025-05-17 00:23:07.839920053 +0000 UTC m=+11.243757112" watchObservedRunningTime="2025-05-17 00:23:07.840566707 +0000 UTC m=+11.244403769"
May 17 00:23:09.351196 kubelet[2656]: E0517 00:23:09.349166 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:09.441067 kubelet[2656]: E0517 00:23:09.440688 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:09.838460 kubelet[2656]: E0517 00:23:09.836455 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:14.547276 update_engine[1565]: I20250517 00:23:14.546480 1565 update_attempter.cc:509] Updating boot flags...
May 17 00:23:14.728470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3037)
May 17 00:23:14.777937 sudo[1788]: pam_unix(sudo:session): session closed for user root
May 17 00:23:14.787626 sshd[1781]: pam_unix(sshd:session): session closed for user core
May 17 00:23:14.811951 systemd[1]: sshd@6-134.199.214.88:22-139.178.68.195:39764.service: Deactivated successfully.
May 17 00:23:14.817048 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:23:14.861845 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit.
May 17 00:23:14.876947 systemd-logind[1555]: Removed session 7.
May 17 00:23:14.975622 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3035) May 17 00:23:19.801472 kubelet[2656]: I0517 00:23:19.801304 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/252954ee-a475-47f6-8d60-627d9ed3df54-tigera-ca-bundle\") pod \"calico-typha-c999fd6c8-bwjk9\" (UID: \"252954ee-a475-47f6-8d60-627d9ed3df54\") " pod="calico-system/calico-typha-c999fd6c8-bwjk9" May 17 00:23:19.801472 kubelet[2656]: I0517 00:23:19.801350 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qmc\" (UniqueName: \"kubernetes.io/projected/252954ee-a475-47f6-8d60-627d9ed3df54-kube-api-access-j2qmc\") pod \"calico-typha-c999fd6c8-bwjk9\" (UID: \"252954ee-a475-47f6-8d60-627d9ed3df54\") " pod="calico-system/calico-typha-c999fd6c8-bwjk9" May 17 00:23:19.801472 kubelet[2656]: I0517 00:23:19.801370 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/252954ee-a475-47f6-8d60-627d9ed3df54-typha-certs\") pod \"calico-typha-c999fd6c8-bwjk9\" (UID: \"252954ee-a475-47f6-8d60-627d9ed3df54\") " pod="calico-system/calico-typha-c999fd6c8-bwjk9" May 17 00:23:20.018238 kubelet[2656]: E0517 00:23:20.015676 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:20.022737 containerd[1593]: time="2025-05-17T00:23:20.022059599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c999fd6c8-bwjk9,Uid:252954ee-a475-47f6-8d60-627d9ed3df54,Namespace:calico-system,Attempt:0,}" May 17 00:23:20.067298 containerd[1593]: time="2025-05-17T00:23:20.066371500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:20.067298 containerd[1593]: time="2025-05-17T00:23:20.066585199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:20.067298 containerd[1593]: time="2025-05-17T00:23:20.066646825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:20.067298 containerd[1593]: time="2025-05-17T00:23:20.066898358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:20.182567 containerd[1593]: time="2025-05-17T00:23:20.181471524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c999fd6c8-bwjk9,Uid:252954ee-a475-47f6-8d60-627d9ed3df54,Namespace:calico-system,Attempt:0,} returns sandbox id \"28fd65229d05815b620aca10ab1660462d9743050683d7a8f9f254e4103425dd\"" May 17 00:23:20.184970 kubelet[2656]: E0517 00:23:20.184590 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:20.188461 containerd[1593]: time="2025-05-17T00:23:20.187842261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:23:20.204050 kubelet[2656]: I0517 00:23:20.203965 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c1061d-3326-4910-be83-01e5156e0bd4-tigera-ca-bundle\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204050 kubelet[2656]: I0517 00:23:20.204023 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-policysync\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204050 kubelet[2656]: I0517 00:23:20.204049 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/80c1061d-3326-4910-be83-01e5156e0bd4-node-certs\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204050 kubelet[2656]: I0517 00:23:20.204067 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-cni-log-dir\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204325 kubelet[2656]: I0517 00:23:20.204089 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-lib-modules\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204325 kubelet[2656]: I0517 00:23:20.204110 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-var-lib-calico\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204325 kubelet[2656]: I0517 00:23:20.204136 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-var-run-calico\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204325 kubelet[2656]: I0517 00:23:20.204156 2656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-flexvol-driver-host\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204325 kubelet[2656]: I0517 00:23:20.204173 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-cni-net-dir\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204578 kubelet[2656]: I0517 00:23:20.204189 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-xtables-lock\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204578 kubelet[2656]: I0517 00:23:20.204203 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ccc\" (UniqueName: \"kubernetes.io/projected/80c1061d-3326-4910-be83-01e5156e0bd4-kube-api-access-66ccc\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.204578 kubelet[2656]: I0517 00:23:20.204222 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/80c1061d-3326-4910-be83-01e5156e0bd4-cni-bin-dir\") pod \"calico-node-pkrzg\" (UID: \"80c1061d-3326-4910-be83-01e5156e0bd4\") " pod="calico-system/calico-node-pkrzg" May 17 00:23:20.321547 kubelet[2656]: E0517 00:23:20.320122 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.321547 kubelet[2656]: W0517 00:23:20.320185 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.321547 kubelet[2656]: E0517 00:23:20.320237 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.331400 kubelet[2656]: E0517 00:23:20.331351 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387" May 17 00:23:20.331817 kubelet[2656]: E0517 00:23:20.331605 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.331817 kubelet[2656]: W0517 00:23:20.331629 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.331940 kubelet[2656]: E0517 00:23:20.331876 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.394482 kubelet[2656]: E0517 00:23:20.394138 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.394482 kubelet[2656]: W0517 00:23:20.394168 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.396677 kubelet[2656]: E0517 00:23:20.396252 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.398953 kubelet[2656]: E0517 00:23:20.397679 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.398953 kubelet[2656]: W0517 00:23:20.397867 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.398953 kubelet[2656]: E0517 00:23:20.397889 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.400200 kubelet[2656]: E0517 00:23:20.400170 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.400200 kubelet[2656]: W0517 00:23:20.400194 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.402581 kubelet[2656]: E0517 00:23:20.402517 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.402906 kubelet[2656]: E0517 00:23:20.402886 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.402968 kubelet[2656]: W0517 00:23:20.402924 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.402968 kubelet[2656]: E0517 00:23:20.402954 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.403343 kubelet[2656]: E0517 00:23:20.403320 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.403403 kubelet[2656]: W0517 00:23:20.403343 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.403403 kubelet[2656]: E0517 00:23:20.403362 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.405449 kubelet[2656]: E0517 00:23:20.404519 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.405449 kubelet[2656]: W0517 00:23:20.404536 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.405449 kubelet[2656]: E0517 00:23:20.404554 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.405630 kubelet[2656]: E0517 00:23:20.405613 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.405630 kubelet[2656]: W0517 00:23:20.405628 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.405698 kubelet[2656]: E0517 00:23:20.405643 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.407708 kubelet[2656]: E0517 00:23:20.407683 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.407708 kubelet[2656]: W0517 00:23:20.407700 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.407708 kubelet[2656]: E0517 00:23:20.407716 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.407982 kubelet[2656]: E0517 00:23:20.407966 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.408020 kubelet[2656]: W0517 00:23:20.407983 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.408020 kubelet[2656]: E0517 00:23:20.407998 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.410611 kubelet[2656]: E0517 00:23:20.410561 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.410611 kubelet[2656]: W0517 00:23:20.410609 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.410804 kubelet[2656]: E0517 00:23:20.410637 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.411466 kubelet[2656]: E0517 00:23:20.410978 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.411466 kubelet[2656]: W0517 00:23:20.410993 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.411466 kubelet[2656]: E0517 00:23:20.411009 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.411466 kubelet[2656]: E0517 00:23:20.411262 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.411466 kubelet[2656]: W0517 00:23:20.411272 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.411466 kubelet[2656]: E0517 00:23:20.411295 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.413432 kubelet[2656]: E0517 00:23:20.412583 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.413432 kubelet[2656]: W0517 00:23:20.412609 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.413432 kubelet[2656]: E0517 00:23:20.412628 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.413432 kubelet[2656]: E0517 00:23:20.413152 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.413432 kubelet[2656]: W0517 00:23:20.413163 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.413432 kubelet[2656]: E0517 00:23:20.413175 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.414026 kubelet[2656]: E0517 00:23:20.414004 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.414026 kubelet[2656]: W0517 00:23:20.414020 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.414131 kubelet[2656]: E0517 00:23:20.414033 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.415202 kubelet[2656]: E0517 00:23:20.415177 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.415202 kubelet[2656]: W0517 00:23:20.415194 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.415202 kubelet[2656]: E0517 00:23:20.415208 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.416160 kubelet[2656]: E0517 00:23:20.416136 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.416160 kubelet[2656]: W0517 00:23:20.416152 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.416160 kubelet[2656]: E0517 00:23:20.416166 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.417661 kubelet[2656]: E0517 00:23:20.417635 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.417661 kubelet[2656]: W0517 00:23:20.417655 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.417763 kubelet[2656]: E0517 00:23:20.417671 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.417909 kubelet[2656]: E0517 00:23:20.417897 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.417948 kubelet[2656]: W0517 00:23:20.417909 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.417948 kubelet[2656]: E0517 00:23:20.417919 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.418570 kubelet[2656]: E0517 00:23:20.418542 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.418570 kubelet[2656]: W0517 00:23:20.418562 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.418679 kubelet[2656]: E0517 00:23:20.418584 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.424014 containerd[1593]: time="2025-05-17T00:23:20.421539845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pkrzg,Uid:80c1061d-3326-4910-be83-01e5156e0bd4,Namespace:calico-system,Attempt:0,}" May 17 00:23:20.424200 kubelet[2656]: E0517 00:23:20.423720 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.424200 kubelet[2656]: W0517 00:23:20.423752 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.424200 kubelet[2656]: E0517 00:23:20.423785 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.424200 kubelet[2656]: I0517 00:23:20.423836 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7aa0df5-b560-4539-8078-1b99b64b6387-registration-dir\") pod \"csi-node-driver-mfjj7\" (UID: \"c7aa0df5-b560-4539-8078-1b99b64b6387\") " pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:20.425861 kubelet[2656]: E0517 00:23:20.425821 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.427535 kubelet[2656]: W0517 00:23:20.427479 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.429097 kubelet[2656]: E0517 00:23:20.427757 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.429097 kubelet[2656]: I0517 00:23:20.427806 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7aa0df5-b560-4539-8078-1b99b64b6387-socket-dir\") pod \"csi-node-driver-mfjj7\" (UID: \"c7aa0df5-b560-4539-8078-1b99b64b6387\") " pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:20.429097 kubelet[2656]: E0517 00:23:20.429007 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.429097 kubelet[2656]: W0517 00:23:20.429026 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.429097 kubelet[2656]: E0517 00:23:20.429060 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.431443 kubelet[2656]: E0517 00:23:20.431255 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.431443 kubelet[2656]: W0517 00:23:20.431279 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.431601 kubelet[2656]: E0517 00:23:20.431557 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.431601 kubelet[2656]: W0517 00:23:20.431566 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.433648 kubelet[2656]: E0517 00:23:20.431719 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.433648 kubelet[2656]: W0517 00:23:20.431732 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.433648 kubelet[2656]: E0517 00:23:20.431747 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.433648 kubelet[2656]: E0517 00:23:20.431760 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.433648 kubelet[2656]: E0517 00:23:20.433468 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.433648 kubelet[2656]: I0517 00:23:20.433524 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7aa0df5-b560-4539-8078-1b99b64b6387-varrun\") pod \"csi-node-driver-mfjj7\" (UID: \"c7aa0df5-b560-4539-8078-1b99b64b6387\") " pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:20.433648 kubelet[2656]: E0517 00:23:20.433651 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.434042 kubelet[2656]: W0517 00:23:20.433668 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.434042 kubelet[2656]: E0517 00:23:20.433685 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.434042 kubelet[2656]: E0517 00:23:20.433895 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.434042 kubelet[2656]: W0517 00:23:20.433904 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.434042 kubelet[2656]: E0517 00:23:20.433917 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.434182 kubelet[2656]: E0517 00:23:20.434099 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.434182 kubelet[2656]: W0517 00:23:20.434110 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.434182 kubelet[2656]: E0517 00:23:20.434121 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.435528 kubelet[2656]: E0517 00:23:20.435445 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.435528 kubelet[2656]: W0517 00:23:20.435467 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.435528 kubelet[2656]: E0517 00:23:20.435488 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.435528 kubelet[2656]: I0517 00:23:20.435523 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprjv\" (UniqueName: \"kubernetes.io/projected/c7aa0df5-b560-4539-8078-1b99b64b6387-kube-api-access-sprjv\") pod \"csi-node-driver-mfjj7\" (UID: \"c7aa0df5-b560-4539-8078-1b99b64b6387\") " pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:20.438675 kubelet[2656]: E0517 00:23:20.436358 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.438675 kubelet[2656]: W0517 00:23:20.436375 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.438675 kubelet[2656]: E0517 00:23:20.436449 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.438675 kubelet[2656]: I0517 00:23:20.436478 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7aa0df5-b560-4539-8078-1b99b64b6387-kubelet-dir\") pod \"csi-node-driver-mfjj7\" (UID: \"c7aa0df5-b560-4539-8078-1b99b64b6387\") " pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:20.439198 kubelet[2656]: E0517 00:23:20.439050 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.439198 kubelet[2656]: W0517 00:23:20.439078 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.439198 kubelet[2656]: E0517 00:23:20.439135 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.440480 kubelet[2656]: E0517 00:23:20.439341 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.440480 kubelet[2656]: W0517 00:23:20.439355 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.440480 kubelet[2656]: E0517 00:23:20.439375 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.445445 kubelet[2656]: E0517 00:23:20.441232 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.445445 kubelet[2656]: W0517 00:23:20.441255 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.445445 kubelet[2656]: E0517 00:23:20.441276 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.445445 kubelet[2656]: E0517 00:23:20.441504 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.445445 kubelet[2656]: W0517 00:23:20.441512 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.445445 kubelet[2656]: E0517 00:23:20.441522 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.489766 containerd[1593]: time="2025-05-17T00:23:20.489499060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:20.489766 containerd[1593]: time="2025-05-17T00:23:20.489567778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:20.489766 containerd[1593]: time="2025-05-17T00:23:20.489578812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:20.491594 containerd[1593]: time="2025-05-17T00:23:20.490283136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:20.541452 kubelet[2656]: E0517 00:23:20.539209 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.541668 kubelet[2656]: W0517 00:23:20.541510 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.541668 kubelet[2656]: E0517 00:23:20.541562 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.542853 kubelet[2656]: E0517 00:23:20.542820 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.542853 kubelet[2656]: W0517 00:23:20.542843 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.543040 kubelet[2656]: E0517 00:23:20.542871 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:20.545540 kubelet[2656]: E0517 00:23:20.545499 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.545540 kubelet[2656]: W0517 00:23:20.545532 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.546619 kubelet[2656]: E0517 00:23:20.546199 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.546619 kubelet[2656]: W0517 00:23:20.546229 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.546619 kubelet[2656]: E0517 00:23:20.546285 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.546619 kubelet[2656]: E0517 00:23:20.546385 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.547091 kubelet[2656]: E0517 00:23:20.547037 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.547091 kubelet[2656]: W0517 00:23:20.547063 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.547218 kubelet[2656]: E0517 00:23:20.547107 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.548089 kubelet[2656]: E0517 00:23:20.548051 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.548349 kubelet[2656]: W0517 00:23:20.548197 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.548349 kubelet[2656]: E0517 00:23:20.548228 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:20.549158 kubelet[2656]: E0517 00:23:20.548636 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:20.549158 kubelet[2656]: W0517 00:23:20.548650 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:20.549960 kubelet[2656]: E0517 00:23:20.549936 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:23:20.551340 kubelet[2656]: E0517 00:23:20.550977 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:23:20.551340 kubelet[2656]: W0517 00:23:20.550996 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:23:20.552935 kubelet[2656]: E0517 00:23:20.552012 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat verbatim with advancing timestamps through May 17 00:23:20.587818]
May 17 00:23:20.602049 containerd[1593]: time="2025-05-17T00:23:20.602007742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pkrzg,Uid:80c1061d-3326-4910-be83-01e5156e0bd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\""
May 17 00:23:21.655177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817046805.mount: Deactivated successfully.
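The burst above is kubelet's FlexVolume prober: on each rescan of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes every vendor~driver binary with the single argument init and parses stdout as a JSON status object. Because nodeagent~uds/uds does not exist yet, the call yields empty output, and unmarshalling "" fails with "unexpected end of JSON input". Below is a minimal sketch of the driver side of that call convention; the struct shape follows the documented FlexVolume status object, not kubelet's source, and the file is a hypothetical stand-in for the missing binary.

    // flexvol_driver_sketch.go - hypothetical stand-in for the missing
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
    // kubelet runs `<driver> init` at probe time and parses stdout as JSON;
    // printing nothing is exactly what produces "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "os"
    )

    // driverStatus approximates the status object the FlexVolume call
    // convention expects on stdout.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        cmd := ""
        if len(os.Args) > 1 {
            cmd = os.Args[1]
        }
        enc := json.NewEncoder(os.Stdout)
        switch cmd {
        case "init":
            // Advertise that kubelet should not call attach/detach for this driver.
            enc.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            // Every other call (mount, unmount, ...) is left unimplemented in this sketch.
            enc.Encode(driverStatus{Status: "Not supported", Message: "sketch driver: " + cmd})
        }
    }

Until an executable that answers init like this exists at that path, the three-message cycle repeats on every probe, which is all the repetition elided above amounts to.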
May 17 00:23:21.742750 kubelet[2656]: E0517 00:23:21.738763 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387"
May 17 00:23:22.588365 containerd[1593]: time="2025-05-17T00:23:22.588290105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:22.589563 containerd[1593]: time="2025-05-17T00:23:22.589459213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 00:23:22.590043 containerd[1593]: time="2025-05-17T00:23:22.589874925Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:22.591867 containerd[1593]: time="2025-05-17T00:23:22.591816556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:22.593284 containerd[1593]: time="2025-05-17T00:23:22.593081731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.405197085s"
May 17 00:23:22.593284 containerd[1593]: time="2025-05-17T00:23:22.593140033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:23:22.595982 containerd[1593]: time="2025-05-17T00:23:22.594955577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:23:22.621428 containerd[1593]: time="2025-05-17T00:23:22.621355862Z" level=info msg="CreateContainer within sandbox \"28fd65229d05815b620aca10ab1660462d9743050683d7a8f9f254e4103425dd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:23:22.631450 containerd[1593]: time="2025-05-17T00:23:22.631322414Z" level=info msg="CreateContainer within sandbox \"28fd65229d05815b620aca10ab1660462d9743050683d7a8f9f254e4103425dd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"97d7d2a0ce6fbc10755fd2ef08aa5308564bd548bc24d037c86ec04cf06e7964\""
May 17 00:23:22.633535 containerd[1593]: time="2025-05-17T00:23:22.633492185Z" level=info msg="StartContainer for \"97d7d2a0ce6fbc10755fd2ef08aa5308564bd548bc24d037c86ec04cf06e7964\""
May 17 00:23:22.817055 containerd[1593]: time="2025-05-17T00:23:22.816921978Z" level=info msg="StartContainer for \"97d7d2a0ce6fbc10755fd2ef08aa5308564bd548bc24d037c86ec04cf06e7964\" returns successfully"
May 17 00:23:22.907540 kubelet[2656]: E0517 00:23:22.904058 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:22.946450 kubelet[2656]: E0517 00:23:22.944860 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:23:22.946450 kubelet[2656]: W0517 00:23:22.944893 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:23:22.946450 kubelet[2656]: E0517 00:23:22.944924 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat, occasionally interleaved, with advancing timestamps through May 17 00:23:23.015515]
May 17 00:23:23.610108 systemd[1]: run-containerd-runc-k8s.io-97d7d2a0ce6fbc10755fd2ef08aa5308564bd548bc24d037c86ec04cf06e7964-runc.KR1EDP.mount: Deactivated successfully.
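Interleaved with the FlexVolume noise, kubelet's dns.go warns about the nameserver line it applies to pod resolv.conf files: classic libc resolvers only honor the first three nameserver entries (the traditional MAXNS limit), so kubelet omits anything beyond that and logs what it kept. Note that the applied line, 67.207.67.3 67.207.67.2 67.207.67.3, even contains a duplicate, which suggests the host's own resolv.conf lists that server twice. A standalone sketch of the trim follows, assuming the conventional /etc/resolv.conf path and limit; it illustrates the rule, it is not kubelet's implementation.

    // resolv_trim_sketch.go - illustrates the limit kubelet is warning about:
    // traditional libc resolvers use only the first three nameserver entries,
    // so kubelet omits the rest when building a pod's resolv.conf.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // classic MAXNS from resolv.h

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        kept := servers
        if len(kept) > maxNameservers {
            kept = kept[:maxNameservers]
        }
        fmt.Printf("applied nameserver line is: %s\n", strings.Join(kept, " "))
        if len(servers) > maxNameservers {
            fmt.Printf("omitted: %s\n", strings.Join(servers[maxNameservers:], " "))
        }
    }

Deduplicating the host resolv.conf would likely quiet the recurring warning without changing effective resolution behavior.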
May 17 00:23:23.738758 kubelet[2656]: E0517 00:23:23.738666 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387"
May 17 00:23:23.907381 kubelet[2656]: I0517 00:23:23.907050 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:23:23.908658 kubelet[2656]: E0517 00:23:23.908633 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:23:23.981565 kubelet[2656]: E0517 00:23:23.980939 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:23:23.981565 kubelet[2656]: W0517 00:23:23.981267 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:23:23.981565 kubelet[2656]: E0517 00:23:23.981349 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat verbatim with advancing timestamps through May 17 00:23:24.008735]
May 17 00:23:24.016192 containerd[1593]: time="2025-05-17T00:23:24.016124110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:24.017621 containerd[1593]: time="2025-05-17T00:23:24.017555748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619"
May 17 00:23:24.018109 containerd[1593]: time="2025-05-17T00:23:24.018031915Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:24.020387 containerd[1593]: time="2025-05-17T00:23:24.020308571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:24.021701 containerd[1593]: time="2025-05-17T00:23:24.021021890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.426023743s"
May 17 00:23:24.021701 containerd[1593]: time="2025-05-17T00:23:24.021062430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\""
May 17 00:23:24.027385 containerd[1593]: time="2025-05-17T00:23:24.027333676Z" level=info msg="CreateContainer within sandbox \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 17 00:23:24.065218 containerd[1593]: time="2025-05-17T00:23:24.065026889Z" level=info msg="CreateContainer within sandbox \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d8ffa369f251caf1580450a3a8f13154c25efe9c796645a0b34246c6083e3a21\""
May 17 00:23:24.066358 containerd[1593]: time="2025-05-17T00:23:24.065910834Z" level=info msg="StartContainer for \"d8ffa369f251caf1580450a3a8f13154c25efe9c796645a0b34246c6083e3a21\""
May 17 00:23:24.165581 containerd[1593]: time="2025-05-17T00:23:24.165393035Z" level=info msg="StartContainer for \"d8ffa369f251caf1580450a3a8f13154c25efe9c796645a0b34246c6083e3a21\" returns successfully"
May 17 00:23:24.228847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8ffa369f251caf1580450a3a8f13154c25efe9c796645a0b34246c6083e3a21-rootfs.mount: Deactivated successfully.
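The flexvol-driver container started here runs Calico's pod2daemon-flexvol image, whose role in the standard Calico manifests is to copy a uds driver binary into the host's FlexVolume plugin directory, i.e. the very /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds path the probes above could not find; once it lands, the repeating error triplet stops. With the binary in place, kubelet's probe can be reproduced by hand. The sketch below mirrors the exec-and-unmarshal sequence that driver-call.go logs about; it is a hypothetical standalone check, not kubelet code.

    // probe_driver_sketch.go - re-runs the probe the kubelet log keeps failing:
    // exec the driver with "init" and unmarshal its stdout as JSON. Run it
    // after the flexvol-driver container has copied the binary into place.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        out, err := exec.Command(driver, "init").Output()
        if err != nil {
            fmt.Fprintf(os.Stderr, "driver call failed: %v, output: %q\n", err, out)
            os.Exit(1)
        }
        var status map[string]interface{}
        if err := json.Unmarshal(out, &status); err != nil {
            // Empty output lands here with "unexpected end of JSON input",
            // matching the driver-call.go:262 message above.
            fmt.Fprintf(os.Stderr, "failed to unmarshal output: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("init reply: %v\n", status)
    }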
May 17 00:23:24.255566 containerd[1593]: time="2025-05-17T00:23:24.228518291Z" level=info msg="shim disconnected" id=d8ffa369f251caf1580450a3a8f13154c25efe9c796645a0b34246c6083e3a21 namespace=k8s.io May 17 00:23:24.255799 containerd[1593]: time="2025-05-17T00:23:24.255775025Z" level=warning msg="cleaning up after shim disconnected" id=d8ffa369f251caf1580450a3a8f13154c25efe9c796645a0b34246c6083e3a21 namespace=k8s.io May 17 00:23:24.255852 containerd[1593]: time="2025-05-17T00:23:24.255842608Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:23:24.278457 containerd[1593]: time="2025-05-17T00:23:24.277129813Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:23:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:23:24.915822 containerd[1593]: time="2025-05-17T00:23:24.914836811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:23:24.933384 kubelet[2656]: I0517 00:23:24.932532 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c999fd6c8-bwjk9" podStartSLOduration=3.5243852369999997 podStartE2EDuration="5.932510485s" podCreationTimestamp="2025-05-17 00:23:19 +0000 UTC" firstStartedPulling="2025-05-17 00:23:20.186507707 +0000 UTC m=+23.590344748" lastFinishedPulling="2025-05-17 00:23:22.594632943 +0000 UTC m=+25.998469996" observedRunningTime="2025-05-17 00:23:22.961648177 +0000 UTC m=+26.365485242" watchObservedRunningTime="2025-05-17 00:23:24.932510485 +0000 UTC m=+28.336347541" May 17 00:23:25.738792 kubelet[2656]: E0517 00:23:25.738720 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387" May 17 00:23:27.739297 kubelet[2656]: E0517 00:23:27.739173 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387" May 17 00:23:27.958648 containerd[1593]: time="2025-05-17T00:23:27.957442653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:27.958648 containerd[1593]: time="2025-05-17T00:23:27.958306071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:23:27.959520 containerd[1593]: time="2025-05-17T00:23:27.959476949Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:27.962175 containerd[1593]: time="2025-05-17T00:23:27.962128854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:27.963557 containerd[1593]: time="2025-05-17T00:23:27.963515190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id 
\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.047792316s" May 17 00:23:27.963557 containerd[1593]: time="2025-05-17T00:23:27.963556899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:23:27.969428 containerd[1593]: time="2025-05-17T00:23:27.969316238Z" level=info msg="CreateContainer within sandbox \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:23:27.990338 containerd[1593]: time="2025-05-17T00:23:27.990199251Z" level=info msg="CreateContainer within sandbox \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"42ce76debf050d561a94d2909e7d76144bcb66eb387fa8f865492d67be6d5a52\"" May 17 00:23:27.992512 containerd[1593]: time="2025-05-17T00:23:27.991175608Z" level=info msg="StartContainer for \"42ce76debf050d561a94d2909e7d76144bcb66eb387fa8f865492d67be6d5a52\"" May 17 00:23:28.113080 containerd[1593]: time="2025-05-17T00:23:28.112360666Z" level=info msg="StartContainer for \"42ce76debf050d561a94d2909e7d76144bcb66eb387fa8f865492d67be6d5a52\" returns successfully" May 17 00:23:28.772760 containerd[1593]: time="2025-05-17T00:23:28.772696911Z" level=info msg="shim disconnected" id=42ce76debf050d561a94d2909e7d76144bcb66eb387fa8f865492d67be6d5a52 namespace=k8s.io May 17 00:23:28.773177 containerd[1593]: time="2025-05-17T00:23:28.772991254Z" level=warning msg="cleaning up after shim disconnected" id=42ce76debf050d561a94d2909e7d76144bcb66eb387fa8f865492d67be6d5a52 namespace=k8s.io May 17 00:23:28.773177 containerd[1593]: time="2025-05-17T00:23:28.773009896Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:23:28.773514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ce76debf050d561a94d2909e7d76144bcb66eb387fa8f865492d67be6d5a52-rootfs.mount: Deactivated successfully. 
May 17 00:23:28.845373 kubelet[2656]: I0517 00:23:28.844046 2656 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:23:28.943178 containerd[1593]: time="2025-05-17T00:23:28.943036401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:23:29.035584 kubelet[2656]: I0517 00:23:29.035367 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-backend-key-pair\") pod \"whisker-85ddd9d5d8-dbs7j\" (UID: \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\") " pod="calico-system/whisker-85ddd9d5d8-dbs7j" May 17 00:23:29.037120 kubelet[2656]: I0517 00:23:29.037084 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggqr\" (UniqueName: \"kubernetes.io/projected/f6d3fda2-c7fd-4936-b8bb-491f8f0ede83-kube-api-access-qggqr\") pod \"calico-apiserver-667c778c59-56rjm\" (UID: \"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83\") " pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" May 17 00:23:29.037431 kubelet[2656]: I0517 00:23:29.037299 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec01702b-c063-4f68-ba46-afbe1753b0e5-tigera-ca-bundle\") pod \"calico-kube-controllers-6b44dc845b-2vb57\" (UID: \"ec01702b-c063-4f68-ba46-afbe1753b0e5\") " pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" May 17 00:23:29.037431 kubelet[2656]: I0517 00:23:29.037346 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7rmj\" (UniqueName: \"kubernetes.io/projected/7ea02198-dcec-4a4d-8c1a-9550aa04b601-kube-api-access-s7rmj\") pod \"whisker-85ddd9d5d8-dbs7j\" (UID: \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\") " pod="calico-system/whisker-85ddd9d5d8-dbs7j" May 17 00:23:29.037431 kubelet[2656]: I0517 00:23:29.037373 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hwc\" (UniqueName: \"kubernetes.io/projected/1b6836be-a527-42a6-b488-c24f1f3b7b87-kube-api-access-75hwc\") pod \"calico-apiserver-667c778c59-kg2wm\" (UID: \"1b6836be-a527-42a6-b488-c24f1f3b7b87\") " pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" May 17 00:23:29.037791 kubelet[2656]: I0517 00:23:29.037398 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjkz\" (UniqueName: \"kubernetes.io/projected/ec01702b-c063-4f68-ba46-afbe1753b0e5-kube-api-access-qpjkz\") pod \"calico-kube-controllers-6b44dc845b-2vb57\" (UID: \"ec01702b-c063-4f68-ba46-afbe1753b0e5\") " pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" May 17 00:23:29.037791 kubelet[2656]: I0517 00:23:29.037609 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqx77\" (UniqueName: \"kubernetes.io/projected/908046ab-b728-4b75-9998-2e33cadd94e3-kube-api-access-zqx77\") pod \"coredns-7c65d6cfc9-rlk2g\" (UID: \"908046ab-b728-4b75-9998-2e33cadd94e3\") " pod="kube-system/coredns-7c65d6cfc9-rlk2g" May 17 00:23:29.037791 kubelet[2656]: I0517 00:23:29.037637 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b005f59-212f-4f5e-ba82-e64c93f912f7-config\") pod 
\"goldmane-8f77d7b6c-84rn9\" (UID: \"7b005f59-212f-4f5e-ba82-e64c93f912f7\") " pod="calico-system/goldmane-8f77d7b6c-84rn9" May 17 00:23:29.038057 kubelet[2656]: I0517 00:23:29.037912 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b005f59-212f-4f5e-ba82-e64c93f912f7-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-84rn9\" (UID: \"7b005f59-212f-4f5e-ba82-e64c93f912f7\") " pod="calico-system/goldmane-8f77d7b6c-84rn9" May 17 00:23:29.039182 kubelet[2656]: I0517 00:23:29.038465 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6d3fda2-c7fd-4936-b8bb-491f8f0ede83-calico-apiserver-certs\") pod \"calico-apiserver-667c778c59-56rjm\" (UID: \"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83\") " pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" May 17 00:23:29.039182 kubelet[2656]: I0517 00:23:29.039052 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cz82\" (UniqueName: \"kubernetes.io/projected/7b005f59-212f-4f5e-ba82-e64c93f912f7-kube-api-access-5cz82\") pod \"goldmane-8f77d7b6c-84rn9\" (UID: \"7b005f59-212f-4f5e-ba82-e64c93f912f7\") " pod="calico-system/goldmane-8f77d7b6c-84rn9" May 17 00:23:29.039182 kubelet[2656]: I0517 00:23:29.039081 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908046ab-b728-4b75-9998-2e33cadd94e3-config-volume\") pod \"coredns-7c65d6cfc9-rlk2g\" (UID: \"908046ab-b728-4b75-9998-2e33cadd94e3\") " pod="kube-system/coredns-7c65d6cfc9-rlk2g" May 17 00:23:29.039182 kubelet[2656]: I0517 00:23:29.039140 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c131439b-80e0-49bc-a36e-7509ece2f8e2-config-volume\") pod \"coredns-7c65d6cfc9-bk5wd\" (UID: \"c131439b-80e0-49bc-a36e-7509ece2f8e2\") " pod="kube-system/coredns-7c65d6cfc9-bk5wd" May 17 00:23:29.039938 kubelet[2656]: I0517 00:23:29.039165 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7b005f59-212f-4f5e-ba82-e64c93f912f7-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-84rn9\" (UID: \"7b005f59-212f-4f5e-ba82-e64c93f912f7\") " pod="calico-system/goldmane-8f77d7b6c-84rn9" May 17 00:23:29.039938 kubelet[2656]: I0517 00:23:29.039351 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b6836be-a527-42a6-b488-c24f1f3b7b87-calico-apiserver-certs\") pod \"calico-apiserver-667c778c59-kg2wm\" (UID: \"1b6836be-a527-42a6-b488-c24f1f3b7b87\") " pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" May 17 00:23:29.039938 kubelet[2656]: I0517 00:23:29.039385 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2qr\" (UniqueName: \"kubernetes.io/projected/c131439b-80e0-49bc-a36e-7509ece2f8e2-kube-api-access-jz2qr\") pod \"coredns-7c65d6cfc9-bk5wd\" (UID: \"c131439b-80e0-49bc-a36e-7509ece2f8e2\") " pod="kube-system/coredns-7c65d6cfc9-bk5wd" May 17 00:23:29.039938 kubelet[2656]: I0517 00:23:29.039411 2656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-ca-bundle\") pod \"whisker-85ddd9d5d8-dbs7j\" (UID: \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\") " pod="calico-system/whisker-85ddd9d5d8-dbs7j" May 17 00:23:29.199541 containerd[1593]: time="2025-05-17T00:23:29.197077605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-84rn9,Uid:7b005f59-212f-4f5e-ba82-e64c93f912f7,Namespace:calico-system,Attempt:0,}" May 17 00:23:29.224116 kubelet[2656]: E0517 00:23:29.221338 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:29.225180 containerd[1593]: time="2025-05-17T00:23:29.225123428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5wd,Uid:c131439b-80e0-49bc-a36e-7509ece2f8e2,Namespace:kube-system,Attempt:0,}" May 17 00:23:29.491390 containerd[1593]: time="2025-05-17T00:23:29.491342102Z" level=error msg="Failed to destroy network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.493087 containerd[1593]: time="2025-05-17T00:23:29.492297948Z" level=error msg="Failed to destroy network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.496569 containerd[1593]: time="2025-05-17T00:23:29.496497987Z" level=error msg="encountered an error cleaning up failed sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.497824 containerd[1593]: time="2025-05-17T00:23:29.496505764Z" level=error msg="encountered an error cleaning up failed sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.504490 containerd[1593]: time="2025-05-17T00:23:29.504069780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5wd,Uid:c131439b-80e0-49bc-a36e-7509ece2f8e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.516908 containerd[1593]: time="2025-05-17T00:23:29.516671596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-kg2wm,Uid:1b6836be-a527-42a6-b488-c24f1f3b7b87,Namespace:calico-apiserver,Attempt:0,}" May 17 00:23:29.523526 kubelet[2656]: E0517 
00:23:29.522909 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:29.524588 containerd[1593]: time="2025-05-17T00:23:29.524019945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rlk2g,Uid:908046ab-b728-4b75-9998-2e33cadd94e3,Namespace:kube-system,Attempt:0,}" May 17 00:23:29.526457 kubelet[2656]: E0517 00:23:29.525326 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.526716 containerd[1593]: time="2025-05-17T00:23:29.525730341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-56rjm,Uid:f6d3fda2-c7fd-4936-b8bb-491f8f0ede83,Namespace:calico-apiserver,Attempt:0,}" May 17 00:23:29.526795 containerd[1593]: time="2025-05-17T00:23:29.526706759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-84rn9,Uid:7b005f59-212f-4f5e-ba82-e64c93f912f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.529282 kubelet[2656]: E0517 00:23:29.528911 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.529282 kubelet[2656]: E0517 00:23:29.529149 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bk5wd" May 17 00:23:29.529282 kubelet[2656]: E0517 00:23:29.529203 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bk5wd" May 17 00:23:29.529654 kubelet[2656]: E0517 00:23:29.529256 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bk5wd_kube-system(c131439b-80e0-49bc-a36e-7509ece2f8e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bk5wd_kube-system(c131439b-80e0-49bc-a36e-7509ece2f8e2)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bk5wd" podUID="c131439b-80e0-49bc-a36e-7509ece2f8e2" May 17 00:23:29.529654 kubelet[2656]: E0517 00:23:29.529306 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-84rn9" May 17 00:23:29.529654 kubelet[2656]: E0517 00:23:29.529324 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-84rn9" May 17 00:23:29.529881 kubelet[2656]: E0517 00:23:29.529352 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-84rn9_calico-system(7b005f59-212f-4f5e-ba82-e64c93f912f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-84rn9_calico-system(7b005f59-212f-4f5e-ba82-e64c93f912f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:23:29.532239 containerd[1593]: time="2025-05-17T00:23:29.531620149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85ddd9d5d8-dbs7j,Uid:7ea02198-dcec-4a4d-8c1a-9550aa04b601,Namespace:calico-system,Attempt:0,}" May 17 00:23:29.542459 containerd[1593]: time="2025-05-17T00:23:29.542091998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b44dc845b-2vb57,Uid:ec01702b-c063-4f68-ba46-afbe1753b0e5,Namespace:calico-system,Attempt:0,}" May 17 00:23:29.751287 containerd[1593]: time="2025-05-17T00:23:29.751052307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfjj7,Uid:c7aa0df5-b560-4539-8078-1b99b64b6387,Namespace:calico-system,Attempt:0,}" May 17 00:23:29.776536 containerd[1593]: time="2025-05-17T00:23:29.776215522Z" level=error msg="Failed to destroy network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.777178 containerd[1593]: time="2025-05-17T00:23:29.776967259Z" level=error msg="encountered an error cleaning up failed sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.777178 containerd[1593]: time="2025-05-17T00:23:29.777026047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b44dc845b-2vb57,Uid:ec01702b-c063-4f68-ba46-afbe1753b0e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.777397 kubelet[2656]: E0517 00:23:29.777348 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.777523 kubelet[2656]: E0517 00:23:29.777440 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" May 17 00:23:29.777523 kubelet[2656]: E0517 00:23:29.777472 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" May 17 00:23:29.780843 kubelet[2656]: E0517 00:23:29.777638 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b44dc845b-2vb57_calico-system(ec01702b-c063-4f68-ba46-afbe1753b0e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b44dc845b-2vb57_calico-system(ec01702b-c063-4f68-ba46-afbe1753b0e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" podUID="ec01702b-c063-4f68-ba46-afbe1753b0e5" May 17 00:23:29.854942 containerd[1593]: time="2025-05-17T00:23:29.854882993Z" level=error msg="Failed to destroy network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.855593 containerd[1593]: time="2025-05-17T00:23:29.855516634Z" level=error msg="encountered an error 
cleaning up failed sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.855593 containerd[1593]: time="2025-05-17T00:23:29.855570625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-56rjm,Uid:f6d3fda2-c7fd-4936-b8bb-491f8f0ede83,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.855927 kubelet[2656]: E0517 00:23:29.855810 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.855927 kubelet[2656]: E0517 00:23:29.855880 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" May 17 00:23:29.856484 kubelet[2656]: E0517 00:23:29.855901 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" May 17 00:23:29.856484 kubelet[2656]: E0517 00:23:29.855973 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-667c778c59-56rjm_calico-apiserver(f6d3fda2-c7fd-4936-b8bb-491f8f0ede83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-667c778c59-56rjm_calico-apiserver(f6d3fda2-c7fd-4936-b8bb-491f8f0ede83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" podUID="f6d3fda2-c7fd-4936-b8bb-491f8f0ede83" May 17 00:23:29.859855 containerd[1593]: time="2025-05-17T00:23:29.859264175Z" level=error msg="Failed to destroy network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 17 00:23:29.859989 containerd[1593]: time="2025-05-17T00:23:29.859809123Z" level=error msg="encountered an error cleaning up failed sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.859989 containerd[1593]: time="2025-05-17T00:23:29.859959162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85ddd9d5d8-dbs7j,Uid:7ea02198-dcec-4a4d-8c1a-9550aa04b601,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.861524 kubelet[2656]: E0517 00:23:29.861396 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.861524 kubelet[2656]: E0517 00:23:29.861462 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85ddd9d5d8-dbs7j" May 17 00:23:29.861524 kubelet[2656]: E0517 00:23:29.861481 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85ddd9d5d8-dbs7j" May 17 00:23:29.861871 kubelet[2656]: E0517 00:23:29.861522 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85ddd9d5d8-dbs7j_calico-system(7ea02198-dcec-4a4d-8c1a-9550aa04b601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85ddd9d5d8-dbs7j_calico-system(7ea02198-dcec-4a4d-8c1a-9550aa04b601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85ddd9d5d8-dbs7j" podUID="7ea02198-dcec-4a4d-8c1a-9550aa04b601" May 17 00:23:29.869098 containerd[1593]: time="2025-05-17T00:23:29.868677397Z" level=error msg="Failed to destroy network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.870856 containerd[1593]: time="2025-05-17T00:23:29.870702492Z" level=error msg="encountered an error cleaning up failed sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.871441 containerd[1593]: time="2025-05-17T00:23:29.870903407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-kg2wm,Uid:1b6836be-a527-42a6-b488-c24f1f3b7b87,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.871577 kubelet[2656]: E0517 00:23:29.871435 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.871577 kubelet[2656]: E0517 00:23:29.871527 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" May 17 00:23:29.871577 kubelet[2656]: E0517 00:23:29.871563 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" May 17 00:23:29.871679 kubelet[2656]: E0517 00:23:29.871612 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-667c778c59-kg2wm_calico-apiserver(1b6836be-a527-42a6-b488-c24f1f3b7b87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-667c778c59-kg2wm_calico-apiserver(1b6836be-a527-42a6-b488-c24f1f3b7b87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" podUID="1b6836be-a527-42a6-b488-c24f1f3b7b87" May 17 00:23:29.904203 containerd[1593]: time="2025-05-17T00:23:29.903437368Z" level=error msg="Failed to destroy network for sandbox 
\"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.904203 containerd[1593]: time="2025-05-17T00:23:29.903856853Z" level=error msg="encountered an error cleaning up failed sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.904203 containerd[1593]: time="2025-05-17T00:23:29.903945554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rlk2g,Uid:908046ab-b728-4b75-9998-2e33cadd94e3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.904439 kubelet[2656]: E0517 00:23:29.904256 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.904439 kubelet[2656]: E0517 00:23:29.904326 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rlk2g" May 17 00:23:29.904439 kubelet[2656]: E0517 00:23:29.904350 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rlk2g" May 17 00:23:29.904550 kubelet[2656]: E0517 00:23:29.904392 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rlk2g_kube-system(908046ab-b728-4b75-9998-2e33cadd94e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rlk2g_kube-system(908046ab-b728-4b75-9998-2e33cadd94e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rlk2g" podUID="908046ab-b728-4b75-9998-2e33cadd94e3" May 17 00:23:29.918114 containerd[1593]: time="2025-05-17T00:23:29.918062598Z" 
level=error msg="Failed to destroy network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.918568 containerd[1593]: time="2025-05-17T00:23:29.918438772Z" level=error msg="encountered an error cleaning up failed sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.918648 containerd[1593]: time="2025-05-17T00:23:29.918602466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfjj7,Uid:c7aa0df5-b560-4539-8078-1b99b64b6387,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.918983 kubelet[2656]: E0517 00:23:29.918944 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:29.919036 kubelet[2656]: E0517 00:23:29.919014 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:29.919067 kubelet[2656]: E0517 00:23:29.919035 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mfjj7" May 17 00:23:29.919145 kubelet[2656]: E0517 00:23:29.919098 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mfjj7_calico-system(c7aa0df5-b560-4539-8078-1b99b64b6387)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mfjj7_calico-system(c7aa0df5-b560-4539-8078-1b99b64b6387)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387" May 17 00:23:29.949439 kubelet[2656]: 
I0517 00:23:29.949320 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:23:29.953464 kubelet[2656]: I0517 00:23:29.952596 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:23:29.958885 containerd[1593]: time="2025-05-17T00:23:29.957447770Z" level=info msg="StopPodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\"" May 17 00:23:29.959429 containerd[1593]: time="2025-05-17T00:23:29.959382153Z" level=info msg="Ensure that sandbox cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f in task-service has been cleanup successfully" May 17 00:23:29.959617 containerd[1593]: time="2025-05-17T00:23:29.959588842Z" level=info msg="StopPodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\"" May 17 00:23:29.959856 containerd[1593]: time="2025-05-17T00:23:29.959838645Z" level=info msg="Ensure that sandbox cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721 in task-service has been cleanup successfully" May 17 00:23:29.966270 kubelet[2656]: I0517 00:23:29.966243 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:23:29.970085 containerd[1593]: time="2025-05-17T00:23:29.969505367Z" level=info msg="StopPodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\"" May 17 00:23:29.970085 containerd[1593]: time="2025-05-17T00:23:29.969695054Z" level=info msg="Ensure that sandbox 761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df in task-service has been cleanup successfully" May 17 00:23:29.977018 kubelet[2656]: I0517 00:23:29.974820 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:29.977865 containerd[1593]: time="2025-05-17T00:23:29.977706646Z" level=info msg="StopPodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\"" May 17 00:23:29.982730 containerd[1593]: time="2025-05-17T00:23:29.982682029Z" level=info msg="Ensure that sandbox b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f in task-service has been cleanup successfully" May 17 00:23:29.987562 kubelet[2656]: I0517 00:23:29.987525 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:23:29.993056 containerd[1593]: time="2025-05-17T00:23:29.992792246Z" level=info msg="StopPodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\"" May 17 00:23:29.993493 containerd[1593]: time="2025-05-17T00:23:29.993468851Z" level=info msg="Ensure that sandbox e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880 in task-service has been cleanup successfully" May 17 00:23:29.995232 kubelet[2656]: I0517 00:23:29.995179 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:29.997081 containerd[1593]: time="2025-05-17T00:23:29.997037891Z" level=info msg="StopPodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\"" May 17 00:23:29.999023 containerd[1593]: 
time="2025-05-17T00:23:29.998162617Z" level=info msg="Ensure that sandbox 8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce in task-service has been cleanup successfully" May 17 00:23:30.011568 kubelet[2656]: I0517 00:23:30.011326 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:30.021996 containerd[1593]: time="2025-05-17T00:23:30.020711907Z" level=info msg="StopPodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\"" May 17 00:23:30.021996 containerd[1593]: time="2025-05-17T00:23:30.020960522Z" level=info msg="Ensure that sandbox 7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a in task-service has been cleanup successfully" May 17 00:23:30.029376 kubelet[2656]: I0517 00:23:30.029068 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:23:30.034270 containerd[1593]: time="2025-05-17T00:23:30.033637180Z" level=info msg="StopPodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\"" May 17 00:23:30.034270 containerd[1593]: time="2025-05-17T00:23:30.033934694Z" level=info msg="Ensure that sandbox e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8 in task-service has been cleanup successfully" May 17 00:23:30.128028 containerd[1593]: time="2025-05-17T00:23:30.127969592Z" level=error msg="StopPodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" failed" error="failed to destroy network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.128936 kubelet[2656]: E0517 00:23:30.128881 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:23:30.129573 kubelet[2656]: E0517 00:23:30.129185 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721"} May 17 00:23:30.129573 kubelet[2656]: E0517 00:23:30.129488 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"908046ab-b728-4b75-9998-2e33cadd94e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.129573 kubelet[2656]: E0517 00:23:30.129528 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"908046ab-b728-4b75-9998-2e33cadd94e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rlk2g" podUID="908046ab-b728-4b75-9998-2e33cadd94e3" May 17 00:23:30.205699 containerd[1593]: time="2025-05-17T00:23:30.205084920Z" level=error msg="StopPodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" failed" error="failed to destroy network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.209470 kubelet[2656]: E0517 00:23:30.208799 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:23:30.209470 kubelet[2656]: E0517 00:23:30.208870 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df"} May 17 00:23:30.209470 kubelet[2656]: E0517 00:23:30.208931 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec01702b-c063-4f68-ba46-afbe1753b0e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.209470 kubelet[2656]: E0517 00:23:30.208972 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec01702b-c063-4f68-ba46-afbe1753b0e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" podUID="ec01702b-c063-4f68-ba46-afbe1753b0e5" May 17 00:23:30.213582 containerd[1593]: time="2025-05-17T00:23:30.213509461Z" level=error msg="StopPodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" failed" error="failed to destroy network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.214185 kubelet[2656]: E0517 00:23:30.213925 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:23:30.214185 kubelet[2656]: E0517 00:23:30.214015 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f"} May 17 00:23:30.214185 kubelet[2656]: E0517 00:23:30.214074 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b005f59-212f-4f5e-ba82-e64c93f912f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.214185 kubelet[2656]: E0517 00:23:30.214108 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b005f59-212f-4f5e-ba82-e64c93f912f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:23:30.233402 containerd[1593]: time="2025-05-17T00:23:30.233334790Z" level=error msg="StopPodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" failed" error="failed to destroy network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.234289 kubelet[2656]: E0517 00:23:30.234149 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:30.234289 kubelet[2656]: E0517 00:23:30.234232 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f"} May 17 00:23:30.234778 kubelet[2656]: E0517 00:23:30.234315 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b6836be-a527-42a6-b488-c24f1f3b7b87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" May 17 00:23:30.234778 kubelet[2656]: E0517 00:23:30.234350 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b6836be-a527-42a6-b488-c24f1f3b7b87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" podUID="1b6836be-a527-42a6-b488-c24f1f3b7b87" May 17 00:23:30.243866 containerd[1593]: time="2025-05-17T00:23:30.243816128Z" level=error msg="StopPodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" failed" error="failed to destroy network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.245026 kubelet[2656]: E0517 00:23:30.244301 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:30.245026 kubelet[2656]: E0517 00:23:30.244383 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce"} May 17 00:23:30.245026 kubelet[2656]: E0517 00:23:30.244924 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.245026 kubelet[2656]: E0517 00:23:30.244969 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" podUID="f6d3fda2-c7fd-4936-b8bb-491f8f0ede83" May 17 00:23:30.246788 containerd[1593]: time="2025-05-17T00:23:30.246681982Z" level=error msg="StopPodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" failed" error="failed to destroy network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.247097 kubelet[2656]: E0517 00:23:30.246974 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:23:30.247097 kubelet[2656]: E0517 00:23:30.247045 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8"} May 17 00:23:30.247201 kubelet[2656]: E0517 00:23:30.247100 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.247201 kubelet[2656]: E0517 00:23:30.247137 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85ddd9d5d8-dbs7j" podUID="7ea02198-dcec-4a4d-8c1a-9550aa04b601" May 17 00:23:30.248992 containerd[1593]: time="2025-05-17T00:23:30.248406456Z" level=error msg="StopPodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" failed" error="failed to destroy network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.250872 containerd[1593]: time="2025-05-17T00:23:30.250783017Z" level=error msg="StopPodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" failed" error="failed to destroy network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:30.251504 kubelet[2656]: E0517 00:23:30.251101 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 
00:23:30.251504 kubelet[2656]: E0517 00:23:30.251182 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a"} May 17 00:23:30.251504 kubelet[2656]: E0517 00:23:30.251230 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c131439b-80e0-49bc-a36e-7509ece2f8e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.251504 kubelet[2656]: E0517 00:23:30.251269 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c131439b-80e0-49bc-a36e-7509ece2f8e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bk5wd" podUID="c131439b-80e0-49bc-a36e-7509ece2f8e2" May 17 00:23:30.255912 kubelet[2656]: E0517 00:23:30.255850 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:23:30.256469 kubelet[2656]: E0517 00:23:30.256253 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880"} May 17 00:23:30.256469 kubelet[2656]: E0517 00:23:30.256339 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7aa0df5-b560-4539-8078-1b99b64b6387\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:30.256469 kubelet[2656]: E0517 00:23:30.256392 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7aa0df5-b560-4539-8078-1b99b64b6387\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mfjj7" podUID="c7aa0df5-b560-4539-8078-1b99b64b6387" May 17 00:23:36.014148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352058036.mount: Deactivated successfully. 
May 17 00:23:36.226312 containerd[1593]: time="2025-05-17T00:23:36.169242162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:23:36.243494 containerd[1593]: time="2025-05-17T00:23:36.243393738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:36.286632 containerd[1593]: time="2025-05-17T00:23:36.286383155Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:36.289599 containerd[1593]: time="2025-05-17T00:23:36.288104618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:36.297205 containerd[1593]: time="2025-05-17T00:23:36.297126943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 7.346305439s" May 17 00:23:36.297506 containerd[1593]: time="2025-05-17T00:23:36.297475402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:23:36.432300 containerd[1593]: time="2025-05-17T00:23:36.432206442Z" level=info msg="CreateContainer within sandbox \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:23:36.454135 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:23:36.455220 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:23:36.454699 systemd-resolved[1477]: Flushed all caches. 
May 17 00:23:36.595360 containerd[1593]: time="2025-05-17T00:23:36.595048203Z" level=info msg="CreateContainer within sandbox \"de610652788aaa7213c14b7731f32e80cf7ced2d9799e344314e8a16eeb41386\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b737be5e6264a5faf5cd11f3fbb4543e4a7b59f833df9f382573251db3d6445d\"" May 17 00:23:36.604569 containerd[1593]: time="2025-05-17T00:23:36.604399640Z" level=info msg="StartContainer for \"b737be5e6264a5faf5cd11f3fbb4543e4a7b59f833df9f382573251db3d6445d\"" May 17 00:23:36.662903 kubelet[2656]: I0517 00:23:36.661392 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:36.664773 kubelet[2656]: E0517 00:23:36.664363 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:36.857905 containerd[1593]: time="2025-05-17T00:23:36.857718177Z" level=info msg="StartContainer for \"b737be5e6264a5faf5cd11f3fbb4543e4a7b59f833df9f382573251db3d6445d\" returns successfully" May 17 00:23:37.061464 kubelet[2656]: E0517 00:23:37.058186 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:37.133361 kubelet[2656]: I0517 00:23:37.107807 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pkrzg" podStartSLOduration=1.398536027 podStartE2EDuration="17.101383899s" podCreationTimestamp="2025-05-17 00:23:20 +0000 UTC" firstStartedPulling="2025-05-17 00:23:20.603326371 +0000 UTC m=+24.007163408" lastFinishedPulling="2025-05-17 00:23:36.30617422 +0000 UTC m=+39.710011280" observedRunningTime="2025-05-17 00:23:37.098935603 +0000 UTC m=+40.502772672" watchObservedRunningTime="2025-05-17 00:23:37.101383899 +0000 UTC m=+40.505220990" May 17 00:23:37.152744 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:23:37.152917 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:23:37.319101 containerd[1593]: time="2025-05-17T00:23:37.319046393Z" level=info msg="StopPodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\"" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.435 [INFO][3883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.436 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" iface="eth0" netns="/var/run/netns/cni-ab350e67-5de0-4492-476b-c6739f2dab81" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.436 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" iface="eth0" netns="/var/run/netns/cni-ab350e67-5de0-4492-476b-c6739f2dab81" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.440 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" iface="eth0" netns="/var/run/netns/cni-ab350e67-5de0-4492-476b-c6739f2dab81" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.440 [INFO][3883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.440 [INFO][3883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.654 [INFO][3896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.663 [INFO][3896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.663 [INFO][3896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.685 [WARNING][3896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.686 [INFO][3896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.688 [INFO][3896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:37.695320 containerd[1593]: 2025-05-17 00:23:37.692 [INFO][3883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:23:37.698358 containerd[1593]: time="2025-05-17T00:23:37.697080487Z" level=info msg="TearDown network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" successfully" May 17 00:23:37.698358 containerd[1593]: time="2025-05-17T00:23:37.697120270Z" level=info msg="StopPodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" returns successfully" May 17 00:23:37.699969 systemd[1]: run-netns-cni\x2dab350e67\x2d5de0\x2d4492\x2d476b\x2dc6739f2dab81.mount: Deactivated successfully. 
May 17 00:23:37.794043 kubelet[2656]: I0517 00:23:37.793977 2656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7rmj\" (UniqueName: \"kubernetes.io/projected/7ea02198-dcec-4a4d-8c1a-9550aa04b601-kube-api-access-s7rmj\") pod \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\" (UID: \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\") " May 17 00:23:37.797463 kubelet[2656]: I0517 00:23:37.796860 2656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-ca-bundle\") pod \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\" (UID: \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\") " May 17 00:23:37.797463 kubelet[2656]: I0517 00:23:37.796941 2656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-backend-key-pair\") pod \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\" (UID: \"7ea02198-dcec-4a4d-8c1a-9550aa04b601\") " May 17 00:23:37.814944 kubelet[2656]: I0517 00:23:37.813297 2656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7ea02198-dcec-4a4d-8c1a-9550aa04b601" (UID: "7ea02198-dcec-4a4d-8c1a-9550aa04b601"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:23:37.825661 kubelet[2656]: I0517 00:23:37.825604 2656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea02198-dcec-4a4d-8c1a-9550aa04b601-kube-api-access-s7rmj" (OuterVolumeSpecName: "kube-api-access-s7rmj") pod "7ea02198-dcec-4a4d-8c1a-9550aa04b601" (UID: "7ea02198-dcec-4a4d-8c1a-9550aa04b601"). InnerVolumeSpecName "kube-api-access-s7rmj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:23:37.826281 kubelet[2656]: I0517 00:23:37.825949 2656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7ea02198-dcec-4a4d-8c1a-9550aa04b601" (UID: "7ea02198-dcec-4a4d-8c1a-9550aa04b601"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:23:37.827842 systemd[1]: var-lib-kubelet-pods-7ea02198\x2ddcec\x2d4a4d\x2d8c1a\x2d9550aa04b601-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds7rmj.mount: Deactivated successfully. May 17 00:23:37.832531 systemd[1]: var-lib-kubelet-pods-7ea02198\x2ddcec\x2d4a4d\x2d8c1a\x2d9550aa04b601-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 00:23:37.897755 kubelet[2656]: I0517 00:23:37.897707 2656 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-2d1cdc348f\" DevicePath \"\"" May 17 00:23:37.897755 kubelet[2656]: I0517 00:23:37.897741 2656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7rmj\" (UniqueName: \"kubernetes.io/projected/7ea02198-dcec-4a4d-8c1a-9550aa04b601-kube-api-access-s7rmj\") on node \"ci-4081.3.3-n-2d1cdc348f\" DevicePath \"\"" May 17 00:23:37.897755 kubelet[2656]: I0517 00:23:37.897751 2656 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ea02198-dcec-4a4d-8c1a-9550aa04b601-whisker-ca-bundle\") on node \"ci-4081.3.3-n-2d1cdc348f\" DevicePath \"\"" May 17 00:23:38.062756 kubelet[2656]: I0517 00:23:38.061996 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:38.200541 kubelet[2656]: I0517 00:23:38.200483 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b963ab05-965a-4613-9925-a8179bee8a6a-whisker-backend-key-pair\") pod \"whisker-88b996598-x9bfz\" (UID: \"b963ab05-965a-4613-9925-a8179bee8a6a\") " pod="calico-system/whisker-88b996598-x9bfz" May 17 00:23:38.200723 kubelet[2656]: I0517 00:23:38.200575 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89xk\" (UniqueName: \"kubernetes.io/projected/b963ab05-965a-4613-9925-a8179bee8a6a-kube-api-access-d89xk\") pod \"whisker-88b996598-x9bfz\" (UID: \"b963ab05-965a-4613-9925-a8179bee8a6a\") " pod="calico-system/whisker-88b996598-x9bfz" May 17 00:23:38.200723 kubelet[2656]: I0517 00:23:38.200606 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b963ab05-965a-4613-9925-a8179bee8a6a-whisker-ca-bundle\") pod \"whisker-88b996598-x9bfz\" (UID: \"b963ab05-965a-4613-9925-a8179bee8a6a\") " pod="calico-system/whisker-88b996598-x9bfz" May 17 00:23:38.460135 containerd[1593]: time="2025-05-17T00:23:38.459770054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88b996598-x9bfz,Uid:b963ab05-965a-4613-9925-a8179bee8a6a,Namespace:calico-system,Attempt:0,}" May 17 00:23:38.500697 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:23:38.501780 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:23:38.500706 systemd-resolved[1477]: Flushed all caches. 
May 17 00:23:38.675330 systemd-networkd[1220]: calif176767afdd: Link UP May 17 00:23:38.675665 systemd-networkd[1220]: calif176767afdd: Gained carrier May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.521 [INFO][3922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.536 [INFO][3922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0 whisker-88b996598- calico-system b963ab05-965a-4613-9925-a8179bee8a6a 905 0 2025-05-17 00:23:38 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:88b996598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f whisker-88b996598-x9bfz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif176767afdd [] [] <nil>}} ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.536 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.586 [INFO][3933] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" HandleID="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.586 [INFO][3933] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" HandleID="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b3020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"whisker-88b996598-x9bfz", "timestamp":"2025-05-17 00:23:38.586405164 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.586 [INFO][3933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.586 [INFO][3933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.586 [INFO][3933] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.598 [INFO][3933] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.610 [INFO][3933] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.617 [INFO][3933] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.619 [INFO][3933] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.622 [INFO][3933] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.622 [INFO][3933] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.625 [INFO][3933] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0 May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.634 [INFO][3933] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.643 [INFO][3933] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.193/26] block=192.168.24.192/26 handle="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.643 [INFO][3933] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.193/26] handle="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.643 [INFO][3933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:38.707052 containerd[1593]: 2025-05-17 00:23:38.643 [INFO][3933] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.193/26] IPv6=[] ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" HandleID="k8s-pod-network.0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.708084 containerd[1593]: 2025-05-17 00:23:38.647 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0", GenerateName:"whisker-88b996598-", Namespace:"calico-system", SelfLink:"", UID:"b963ab05-965a-4613-9925-a8179bee8a6a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"88b996598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"whisker-88b996598-x9bfz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif176767afdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:38.708084 containerd[1593]: 2025-05-17 00:23:38.647 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.193/32] ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.708084 containerd[1593]: 2025-05-17 00:23:38.647 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif176767afdd ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.708084 containerd[1593]: 2025-05-17 00:23:38.672 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.708084 containerd[1593]: 2025-05-17 00:23:38.677 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system"
Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0", GenerateName:"whisker-88b996598-", Namespace:"calico-system", SelfLink:"", UID:"b963ab05-965a-4613-9925-a8179bee8a6a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"88b996598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0", Pod:"whisker-88b996598-x9bfz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif176767afdd", MAC:"e2:52:68:98:cd:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:38.708084 containerd[1593]: 2025-05-17 00:23:38.701 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0" Namespace="calico-system" Pod="whisker-88b996598-x9bfz" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--88b996598--x9bfz-eth0" May 17 00:23:38.743142 kubelet[2656]: I0517 00:23:38.742618 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea02198-dcec-4a4d-8c1a-9550aa04b601" path="/var/lib/kubelet/pods/7ea02198-dcec-4a4d-8c1a-9550aa04b601/volumes" May 17 00:23:38.800800 containerd[1593]: time="2025-05-17T00:23:38.796390360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:38.800800 containerd[1593]: time="2025-05-17T00:23:38.796714539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:38.800800 containerd[1593]: time="2025-05-17T00:23:38.796732191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:38.800800 containerd[1593]: time="2025-05-17T00:23:38.796995083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:38.998603 containerd[1593]: time="2025-05-17T00:23:38.996819809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88b996598-x9bfz,Uid:b963ab05-965a-4613-9925-a8179bee8a6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cc5af938018f6c1201a62e2858c708d0bc9b71ead298d3f287043b7a7b567e0\"" May 17 00:23:39.046685 containerd[1593]: time="2025-05-17T00:23:39.041022188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:23:39.342401 containerd[1593]: time="2025-05-17T00:23:39.342066627Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:39.343643 containerd[1593]: time="2025-05-17T00:23:39.343231748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:39.343643 containerd[1593]: time="2025-05-17T00:23:39.343287018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:23:39.348514 kubelet[2656]: E0517 00:23:39.348304 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:39.351442 kubelet[2656]: E0517 00:23:39.350163 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:39.366634 kubelet[2656]: E0517 00:23:39.366541 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dee365f02e9f4c97935324a2c6e9b0b6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d89xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-88b996598-x9bfz_calico-system(b963ab05-965a-4613-9925-a8179bee8a6a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:39.371660 containerd[1593]: time="2025-05-17T00:23:39.371607390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:23:39.433497 kernel: bpftool[4109]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:23:39.592651 containerd[1593]: time="2025-05-17T00:23:39.592393812Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:39.593214 containerd[1593]: time="2025-05-17T00:23:39.593164485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:39.595237 containerd[1593]: time="2025-05-17T00:23:39.593307988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:23:39.595346 kubelet[2656]: E0517 00:23:39.593557 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:39.595346 kubelet[2656]: E0517 00:23:39.593662 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:39.595920 kubelet[2656]: E0517 00:23:39.593815 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d89xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-88b996598-x9bfz_calico-system(b963ab05-965a-4613-9925-a8179bee8a6a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:39.604046 kubelet[2656]: E0517 00:23:39.603600 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:23:39.772710 systemd-networkd[1220]: vxlan.calico: Link UP May 17 00:23:39.772720 systemd-networkd[1220]: vxlan.calico: Gained carrier May 17 00:23:40.101034 systemd-networkd[1220]: calif176767afdd: Gained IPv6LL May 17 00:23:40.139281 kubelet[2656]: E0517 00:23:40.138142 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:23:41.060694 systemd-networkd[1220]: vxlan.calico: Gained IPv6LL May 17 00:23:41.739782 containerd[1593]: time="2025-05-17T00:23:41.739664092Z" level=info msg="StopPodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\"" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.810 [INFO][4197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.810 [INFO][4197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" iface="eth0" netns="/var/run/netns/cni-10f5b47d-9bb4-8004-12dc-d1dfc1ea1bd0" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.811 [INFO][4197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" iface="eth0" netns="/var/run/netns/cni-10f5b47d-9bb4-8004-12dc-d1dfc1ea1bd0" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.811 [INFO][4197] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" iface="eth0" netns="/var/run/netns/cni-10f5b47d-9bb4-8004-12dc-d1dfc1ea1bd0" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.811 [INFO][4197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.811 [INFO][4197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.848 [INFO][4204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.848 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.849 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.856 [WARNING][4204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.856 [INFO][4204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.858 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:41.864908 containerd[1593]: 2025-05-17 00:23:41.860 [INFO][4197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:23:41.866824 containerd[1593]: time="2025-05-17T00:23:41.865664207Z" level=info msg="TearDown network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" successfully" May 17 00:23:41.866824 containerd[1593]: time="2025-05-17T00:23:41.865709381Z" level=info msg="StopPodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" returns successfully" May 17 00:23:41.867885 systemd[1]: run-netns-cni\x2d10f5b47d\x2d9bb4\x2d8004\x2d12dc\x2dd1dfc1ea1bd0.mount: Deactivated successfully. 
May 17 00:23:41.868938 containerd[1593]: time="2025-05-17T00:23:41.868897762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-84rn9,Uid:7b005f59-212f-4f5e-ba82-e64c93f912f7,Namespace:calico-system,Attempt:1,}" May 17 00:23:42.033459 systemd-networkd[1220]: cali20e7cbbbd4b: Link UP May 17 00:23:42.035192 systemd-networkd[1220]: cali20e7cbbbd4b: Gained carrier May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.939 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0 goldmane-8f77d7b6c- calico-system 7b005f59-212f-4f5e-ba82-e64c93f912f7 929 0 2025-05-17 00:23:19 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f goldmane-8f77d7b6c-84rn9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali20e7cbbbd4b [] [] <nil>}} ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.939 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.976 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" HandleID="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.976 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" HandleID="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003319a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"goldmane-8f77d7b6c-84rn9", "timestamp":"2025-05-17 00:23:41.976651857 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.976 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.976 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.977 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.986 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:41.996 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.002 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.004 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.008 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.008 [INFO][4223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.011 [INFO][4223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.016 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.025 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.194/26] block=192.168.24.192/26 handle="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.025 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.194/26] handle="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.025 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:42.062011 containerd[1593]: 2025-05-17 00:23:42.025 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.194/26] IPv6=[] ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" HandleID="k8s-pod-network.40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.063297 containerd[1593]: 2025-05-17 00:23:42.028 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"7b005f59-212f-4f5e-ba82-e64c93f912f7", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"goldmane-8f77d7b6c-84rn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20e7cbbbd4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:42.063297 containerd[1593]: 2025-05-17 00:23:42.028 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.194/32] ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.063297 containerd[1593]: 2025-05-17 00:23:42.028 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20e7cbbbd4b ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.063297 containerd[1593]: 2025-05-17 00:23:42.036 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.063297 containerd[1593]: 2025-05-17 00:23:42.037 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" 
Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"7b005f59-212f-4f5e-ba82-e64c93f912f7", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f", Pod:"goldmane-8f77d7b6c-84rn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20e7cbbbd4b", MAC:"f2:1c:c5:1a:1e:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:42.063297 containerd[1593]: 2025-05-17 00:23:42.055 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f" Namespace="calico-system" Pod="goldmane-8f77d7b6c-84rn9" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:23:42.089603 containerd[1593]: time="2025-05-17T00:23:42.089147800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:42.089967 containerd[1593]: time="2025-05-17T00:23:42.089917396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:42.090368 containerd[1593]: time="2025-05-17T00:23:42.090239621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:42.090876 containerd[1593]: time="2025-05-17T00:23:42.090760242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:42.170533 containerd[1593]: time="2025-05-17T00:23:42.170388693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-84rn9,Uid:7b005f59-212f-4f5e-ba82-e64c93f912f7,Namespace:calico-system,Attempt:1,} returns sandbox id \"40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f\"" May 17 00:23:42.173144 containerd[1593]: time="2025-05-17T00:23:42.172891962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:23:42.378733 containerd[1593]: time="2025-05-17T00:23:42.378384273Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:42.379494 containerd[1593]: time="2025-05-17T00:23:42.379449901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:42.379810 containerd[1593]: time="2025-05-17T00:23:42.379456274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:23:42.380071 kubelet[2656]: E0517 00:23:42.380009 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:42.380563 kubelet[2656]: E0517 00:23:42.380101 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:42.380563 kubelet[2656]: E0517 00:23:42.380265 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cz82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-84rn9_calico-system(7b005f59-212f-4f5e-ba82-e64c93f912f7): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:42.381666 kubelet[2656]: E0517 00:23:42.381571 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:23:42.740678 containerd[1593]: time="2025-05-17T00:23:42.739817326Z" level=info msg="StopPodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\"" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.811 [INFO][4290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.812 [INFO][4290] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" iface="eth0" netns="/var/run/netns/cni-bde067ba-64c4-9444-cd0d-a2ee909ebe33" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.812 [INFO][4290] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" iface="eth0" netns="/var/run/netns/cni-bde067ba-64c4-9444-cd0d-a2ee909ebe33" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.812 [INFO][4290] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" iface="eth0" netns="/var/run/netns/cni-bde067ba-64c4-9444-cd0d-a2ee909ebe33" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.812 [INFO][4290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.812 [INFO][4290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.843 [INFO][4297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.843 [INFO][4297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.844 [INFO][4297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.851 [WARNING][4297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.851 [INFO][4297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.854 [INFO][4297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
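
The failed pull above dies at the registry's token endpoint, before any image data moves: containerd asks ghcr.io for an anonymous bearer token scoped to the repository, and that token request itself answers 403 Forbidden. The snippet below merely reproduces the request in isolation (the URL is copied verbatim from the log entry; it assumes outbound HTTPS from the host):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Token URL taken verbatim from the "trying next host" entry above.
        const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"

        resp, err := http.Get(tokenURL)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // A healthy endpoint answers 200 with {"token":"..."}; the log shows
        // 403 Forbidden here, so the pull fails before any layer is fetched.
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }
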
May 17 00:23:42.859298 containerd[1593]: 2025-05-17 00:23:42.856 [INFO][4290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:23:42.860187 containerd[1593]: time="2025-05-17T00:23:42.859501567Z" level=info msg="TearDown network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" successfully" May 17 00:23:42.860187 containerd[1593]: time="2025-05-17T00:23:42.859540189Z" level=info msg="StopPodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" returns successfully" May 17 00:23:42.860721 containerd[1593]: time="2025-05-17T00:23:42.860679213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfjj7,Uid:c7aa0df5-b560-4539-8078-1b99b64b6387,Namespace:calico-system,Attempt:1,}" May 17 00:23:42.869178 systemd[1]: run-containerd-runc-k8s.io-40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f-runc.34XuhJ.mount: Deactivated successfully. May 17 00:23:42.869378 systemd[1]: run-netns-cni\x2dbde067ba\x2d64c4\x2d9444\x2dcd0d\x2da2ee909ebe33.mount: Deactivated successfully. May 17 00:23:43.088118 systemd-networkd[1220]: cali0ed04338236: Link UP May 17 00:23:43.090961 systemd-networkd[1220]: cali0ed04338236: Gained carrier May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:42.974 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0 csi-node-driver- calico-system c7aa0df5-b560-4539-8078-1b99b64b6387 938 0 2025-05-17 00:23:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f csi-node-driver-mfjj7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0ed04338236 [] [] }} ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:42.975 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.031 [INFO][4315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" HandleID="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.031 [INFO][4315] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" HandleID="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f780), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"csi-node-driver-mfjj7", "timestamp":"2025-05-17 00:23:43.031056642 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.031 [INFO][4315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.031 [INFO][4315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.031 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.040 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.048 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.053 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.056 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.059 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.060 [INFO][4315] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.062 [INFO][4315] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.067 [INFO][4315] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.076 [INFO][4315] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.195/26] block=192.168.24.192/26 handle="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.076 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.195/26] handle="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.076 [INFO][4315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
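
The same acquire/claim/release sequence now repeats for csi-node-driver-mfjj7 and yields the block's next address, 192.168.24.195. Because these CNI entries are so regular, the claimed addresses can be pulled out of the journal mechanically; a small stdin filter, with the pattern written against the "Successfully claimed IPs" lines above (a sketch, not a robust parser):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches e.g.: Successfully claimed IPs: [192.168.24.195/26] ... handle="k8s-pod-network.c826..."
    var claimed = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\].*handle="([^"]+)"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            if m := claimed.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("%s <- handle %s\n", m[1], m[2])
            }
        }
    }

Fed this journal on stdin it would print one line per assignment: .194 for goldmane, .195 for the CSI driver, and so on.
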
May 17 00:23:43.114066 containerd[1593]: 2025-05-17 00:23:43.076 [INFO][4315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.195/26] IPv6=[] ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" HandleID="k8s-pod-network.c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.114955 containerd[1593]: 2025-05-17 00:23:43.080 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7aa0df5-b560-4539-8078-1b99b64b6387", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"csi-node-driver-mfjj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ed04338236", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:43.114955 containerd[1593]: 2025-05-17 00:23:43.081 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.195/32] ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.114955 containerd[1593]: 2025-05-17 00:23:43.081 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ed04338236 ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.114955 containerd[1593]: 2025-05-17 00:23:43.091 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.114955 containerd[1593]: 2025-05-17 00:23:43.093 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7aa0df5-b560-4539-8078-1b99b64b6387", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e", Pod:"csi-node-driver-mfjj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ed04338236", MAC:"a2:eb:14:de:ab:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:43.114955 containerd[1593]: 2025-05-17 00:23:43.108 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e" Namespace="calico-system" Pod="csi-node-driver-mfjj7" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:23:43.138582 containerd[1593]: time="2025-05-17T00:23:43.138295941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:43.139091 containerd[1593]: time="2025-05-17T00:23:43.138387206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:43.139091 containerd[1593]: time="2025-05-17T00:23:43.139059735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:43.139240 containerd[1593]: time="2025-05-17T00:23:43.139211070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:43.159937 kubelet[2656]: E0517 00:23:43.156970 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:23:43.234843 containerd[1593]: time="2025-05-17T00:23:43.234756562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfjj7,Uid:c7aa0df5-b560-4539-8078-1b99b64b6387,Namespace:calico-system,Attempt:1,} returns sandbox id \"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e\"" May 17 00:23:43.237924 containerd[1593]: time="2025-05-17T00:23:43.237880390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:23:43.301146 systemd-networkd[1220]: cali20e7cbbbd4b: Gained IPv6LL May 17 00:23:43.739659 containerd[1593]: time="2025-05-17T00:23:43.739255963Z" level=info msg="StopPodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\"" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.812 [INFO][4382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.812 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" iface="eth0" netns="/var/run/netns/cni-1f08252f-4193-97d0-f7eb-5123b4a8e0a0" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.813 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" iface="eth0" netns="/var/run/netns/cni-1f08252f-4193-97d0-f7eb-5123b4a8e0a0" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.814 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" iface="eth0" netns="/var/run/netns/cni-1f08252f-4193-97d0-f7eb-5123b4a8e0a0" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.814 [INFO][4382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.814 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.843 [INFO][4390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.843 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.843 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.851 [WARNING][4390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.851 [INFO][4390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.854 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:43.859350 containerd[1593]: 2025-05-17 00:23:43.856 [INFO][4382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:23:43.862964 containerd[1593]: time="2025-05-17T00:23:43.859944158Z" level=info msg="TearDown network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" successfully" May 17 00:23:43.862964 containerd[1593]: time="2025-05-17T00:23:43.860149375Z" level=info msg="StopPodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" returns successfully" May 17 00:23:43.862964 containerd[1593]: time="2025-05-17T00:23:43.861712647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b44dc845b-2vb57,Uid:ec01702b-c063-4f68-ba46-afbe1753b0e5,Namespace:calico-system,Attempt:1,}" May 17 00:23:43.868589 systemd[1]: run-netns-cni\x2d1f08252f\x2d4193\x2d97d0\x2df7eb\x2d5123b4a8e0a0.mount: Deactivated successfully. 
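
Note the WARNING inside both teardowns above: the DEL handler asks IPAM to release an address that is already gone and simply carries on, first by handle and then by workload ID. Release has to be idempotent, since a sandbox teardown can be retried after a partial earlier cleanup. A sketch of that behavior (names illustrative):

    package main

    import "fmt"

    var allocated = map[string]string{} // IPAM handle -> address

    // release frees whatever the handle holds; releasing twice is not an error.
    func release(handleID string) {
        ip, ok := allocated[handleID]
        if !ok {
            // Mirrors the log's WARNING: nothing to free, keep tearing down.
            fmt.Printf("Asked to release address but it doesn't exist. Ignoring handle=%q\n", handleID)
            return
        }
        delete(allocated, handleID)
        fmt.Printf("released %s (handle %s)\n", ip, handleID)
    }

    func main() {
        release("k8s-pod-network.761f2d69") // already freed: warning, then continue
    }
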
May 17 00:23:44.029165 systemd-networkd[1220]: calid23b73b65fd: Link UP May 17 00:23:44.031386 systemd-networkd[1220]: calid23b73b65fd: Gained carrier May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.930 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0 calico-kube-controllers-6b44dc845b- calico-system ec01702b-c063-4f68-ba46-afbe1753b0e5 953 0 2025-05-17 00:23:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b44dc845b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f calico-kube-controllers-6b44dc845b-2vb57 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid23b73b65fd [] [] }} ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.931 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.970 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" HandleID="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.970 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" HandleID="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"calico-kube-controllers-6b44dc845b-2vb57", "timestamp":"2025-05-17 00:23:43.970398784 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.970 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.970 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.970 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.979 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.987 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.994 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:43.999 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.002 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.003 [INFO][4408] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.005 [INFO][4408] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2 May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.011 [INFO][4408] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.020 [INFO][4408] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.196/26] block=192.168.24.192/26 handle="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.021 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.196/26] handle="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.021 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
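
That is the third assignment from the same host-affine block: 192.168.24.194, .195 and now .196, handed out in sequence. The underlying arithmetic is plain CIDR math over the /26 named in the log, which net/netip can enumerate directly:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.24.192/26") // CIDR from the log

        first := block.Addr()
        last, n := first, 0
        for a := first; block.Contains(a); a = a.Next() {
            last = a
            n++
        }
        // Prints: block 192.168.24.192/26: 64 addresses, 192.168.24.192-192.168.24.255
        fmt.Printf("block %s: %d addresses, %s-%s\n", block, n, first, last)
    }
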
May 17 00:23:44.063721 containerd[1593]: 2025-05-17 00:23:44.021 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.196/26] IPv6=[] ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" HandleID="k8s-pod-network.7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.067502 containerd[1593]: 2025-05-17 00:23:44.023 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0", GenerateName:"calico-kube-controllers-6b44dc845b-", Namespace:"calico-system", SelfLink:"", UID:"ec01702b-c063-4f68-ba46-afbe1753b0e5", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b44dc845b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"calico-kube-controllers-6b44dc845b-2vb57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid23b73b65fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:44.067502 containerd[1593]: 2025-05-17 00:23:44.023 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.196/32] ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.067502 containerd[1593]: 2025-05-17 00:23:44.023 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid23b73b65fd ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.067502 containerd[1593]: 2025-05-17 00:23:44.034 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" 
WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.067502 containerd[1593]: 2025-05-17 00:23:44.038 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0", GenerateName:"calico-kube-controllers-6b44dc845b-", Namespace:"calico-system", SelfLink:"", UID:"ec01702b-c063-4f68-ba46-afbe1753b0e5", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b44dc845b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2", Pod:"calico-kube-controllers-6b44dc845b-2vb57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid23b73b65fd", MAC:"42:8f:3b:3e:8e:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:44.067502 containerd[1593]: 2025-05-17 00:23:44.057 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2" Namespace="calico-system" Pod="calico-kube-controllers-6b44dc845b-2vb57" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:23:44.098024 containerd[1593]: time="2025-05-17T00:23:44.097878677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:44.098024 containerd[1593]: time="2025-05-17T00:23:44.097944843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:44.098024 containerd[1593]: time="2025-05-17T00:23:44.097986846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:44.098484 containerd[1593]: time="2025-05-17T00:23:44.098149552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:44.163108 kubelet[2656]: E0517 00:23:44.161852 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:23:44.204388 containerd[1593]: time="2025-05-17T00:23:44.204325084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b44dc845b-2vb57,Uid:ec01702b-c063-4f68-ba46-afbe1753b0e5,Namespace:calico-system,Attempt:1,} returns sandbox id \"7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2\"" May 17 00:23:44.581637 systemd-networkd[1220]: cali0ed04338236: Gained IPv6LL May 17 00:23:44.601151 containerd[1593]: time="2025-05-17T00:23:44.600091176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.601151 containerd[1593]: time="2025-05-17T00:23:44.600885515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:23:44.601151 containerd[1593]: time="2025-05-17T00:23:44.601075532Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.603333 containerd[1593]: time="2025-05-17T00:23:44.603294314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.604289 containerd[1593]: time="2025-05-17T00:23:44.604251647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.366102257s" May 17 00:23:44.604289 containerd[1593]: time="2025-05-17T00:23:44.604287313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:23:44.607794 containerd[1593]: time="2025-05-17T00:23:44.607043502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:23:44.618362 containerd[1593]: time="2025-05-17T00:23:44.618266947Z" level=info msg="CreateContainer within sandbox \"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:23:44.641233 containerd[1593]: time="2025-05-17T00:23:44.641173473Z" level=info msg="CreateContainer within sandbox \"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d1732f669cc54fc61255f60484e253a7c67c4f44ec2d3f0e5c41e5d2d8aba5b8\"" May 17 00:23:44.642209 containerd[1593]: time="2025-05-17T00:23:44.642163723Z" level=info msg="StartContainer for \"d1732f669cc54fc61255f60484e253a7c67c4f44ec2d3f0e5c41e5d2d8aba5b8\"" May 17 00:23:44.723123 containerd[1593]: time="2025-05-17T00:23:44.723030357Z" level=info msg="StartContainer for 
\"d1732f669cc54fc61255f60484e253a7c67c4f44ec2d3f0e5c41e5d2d8aba5b8\" returns successfully" May 17 00:23:44.745094 containerd[1593]: time="2025-05-17T00:23:44.744965031Z" level=info msg="StopPodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\"" May 17 00:23:44.749452 containerd[1593]: time="2025-05-17T00:23:44.749094745Z" level=info msg="StopPodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\"" May 17 00:23:44.755653 containerd[1593]: time="2025-05-17T00:23:44.755545635Z" level=info msg="StopPodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\"" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.849 [INFO][4537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.850 [INFO][4537] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" iface="eth0" netns="/var/run/netns/cni-f4a96adc-fa97-1769-6e62-1b799c715548" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.851 [INFO][4537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" iface="eth0" netns="/var/run/netns/cni-f4a96adc-fa97-1769-6e62-1b799c715548" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.852 [INFO][4537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" iface="eth0" netns="/var/run/netns/cni-f4a96adc-fa97-1769-6e62-1b799c715548" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.852 [INFO][4537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.852 [INFO][4537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.937 [INFO][4548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.938 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.938 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.950 [WARNING][4548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.950 [INFO][4548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.955 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:44.968908 containerd[1593]: 2025-05-17 00:23:44.964 [INFO][4537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:44.987177 systemd[1]: run-netns-cni\x2df4a96adc\x2dfa97\x2d1769\x2d6e62\x2d1b799c715548.mount: Deactivated successfully. May 17 00:23:45.001840 containerd[1593]: time="2025-05-17T00:23:44.977341835Z" level=info msg="TearDown network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" successfully" May 17 00:23:45.001840 containerd[1593]: time="2025-05-17T00:23:45.001826299Z" level=info msg="StopPodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" returns successfully" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.898 [INFO][4521] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.900 [INFO][4521] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" iface="eth0" netns="/var/run/netns/cni-7c11d4b1-e566-f694-8f6e-099f754559ae" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.902 [INFO][4521] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" iface="eth0" netns="/var/run/netns/cni-7c11d4b1-e566-f694-8f6e-099f754559ae" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.907 [INFO][4521] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" iface="eth0" netns="/var/run/netns/cni-7c11d4b1-e566-f694-8f6e-099f754559ae" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.908 [INFO][4521] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.908 [INFO][4521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.967 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.967 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.967 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.977 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.977 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.983 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:45.002094 containerd[1593]: 2025-05-17 00:23:44.993 [INFO][4521] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:23:45.010887 containerd[1593]: time="2025-05-17T00:23:45.002247868Z" level=info msg="TearDown network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" successfully" May 17 00:23:45.010887 containerd[1593]: time="2025-05-17T00:23:45.002273375Z" level=info msg="StopPodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" returns successfully" May 17 00:23:45.007878 systemd[1]: run-netns-cni\x2d7c11d4b1\x2de566\x2df694\x2d8f6e\x2d099f754559ae.mount: Deactivated successfully. 
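
Each StopPodSandbox above ends the same way: the IPAM handle is released, teardown completes, and systemd reports the corresponding run-netns-cni...mount unit deactivated, i.e. the named namespace's bind mount under /var/run/netns is unmounted and its mount point removed. A Linux-only sketch of that final step (netns name taken from the log; requires root, and real CNI plumbing goes through netns helper libraries rather than doing this by hand):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func deleteNetns(name string) error {
        path := "/var/run/netns/" + name
        // Lazily detach the bind mount that keeps the namespace alive...
        if err := syscall.Unmount(path, syscall.MNT_DETACH); err != nil && !os.IsNotExist(err) {
            return fmt.Errorf("unmount %s: %w", path, err)
        }
        // ...then remove the now-empty mount point.
        if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
            return err
        }
        return nil
    }

    func main() {
        // Name from the "run-netns-cni..." mount unit above.
        if err := deleteNetns("cni-7c11d4b1-e566-f694-8f6e-099f754559ae"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
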
May 17 00:23:45.011053 kubelet[2656]: E0517 00:23:45.003267 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:45.011053 kubelet[2656]: E0517 00:23:45.010750 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:45.015552 containerd[1593]: time="2025-05-17T00:23:45.014998890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rlk2g,Uid:908046ab-b728-4b75-9998-2e33cadd94e3,Namespace:kube-system,Attempt:1,}" May 17 00:23:45.019527 containerd[1593]: time="2025-05-17T00:23:45.016776698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5wd,Uid:c131439b-80e0-49bc-a36e-7509ece2f8e2,Namespace:kube-system,Attempt:1,}" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.907 [INFO][4517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.908 [INFO][4517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" iface="eth0" netns="/var/run/netns/cni-2f2a93b8-e3d7-ff19-7d08-1adb624ef28f" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.908 [INFO][4517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" iface="eth0" netns="/var/run/netns/cni-2f2a93b8-e3d7-ff19-7d08-1adb624ef28f" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.909 [INFO][4517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" iface="eth0" netns="/var/run/netns/cni-2f2a93b8-e3d7-ff19-7d08-1adb624ef28f" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.909 [INFO][4517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.909 [INFO][4517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.981 [INFO][4556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.981 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.983 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.995 [WARNING][4556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:44.996 [INFO][4556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:45.000 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:45.019527 containerd[1593]: 2025-05-17 00:23:45.011 [INFO][4517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:45.019527 containerd[1593]: time="2025-05-17T00:23:45.017473435Z" level=info msg="TearDown network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" successfully" May 17 00:23:45.019527 containerd[1593]: time="2025-05-17T00:23:45.017510550Z" level=info msg="StopPodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" returns successfully" May 17 00:23:45.023969 containerd[1593]: time="2025-05-17T00:23:45.021703929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-56rjm,Uid:f6d3fda2-c7fd-4936-b8bb-491f8f0ede83,Namespace:calico-apiserver,Attempt:1,}" May 17 00:23:45.023638 systemd[1]: run-netns-cni\x2d2f2a93b8\x2de3d7\x2dff19\x2d7d08\x2d1adb624ef28f.mount: Deactivated successfully. 
May 17 00:23:45.298201 systemd-networkd[1220]: cali070110cdb24: Link UP May 17 00:23:45.300402 systemd-networkd[1220]: cali070110cdb24: Gained carrier May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.122 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0 calico-apiserver-667c778c59- calico-apiserver f6d3fda2-c7fd-4936-b8bb-491f8f0ede83 971 0 2025-05-17 00:23:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:667c778c59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f calico-apiserver-667c778c59-56rjm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali070110cdb24 [] [] }} ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.123 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.196 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" HandleID="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.196 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" HandleID="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"calico-apiserver-667c778c59-56rjm", "timestamp":"2025-05-17 00:23:45.19622347 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.196 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.196 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.196 [INFO][4607] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.211 [INFO][4607] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.227 [INFO][4607] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.236 [INFO][4607] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.240 [INFO][4607] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.250 [INFO][4607] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.252 [INFO][4607] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.256 [INFO][4607] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.265 [INFO][4607] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.277 [INFO][4607] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.197/26] block=192.168.24.192/26 handle="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.277 [INFO][4607] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.197/26] handle="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.277 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:45.353125 containerd[1593]: 2025-05-17 00:23:45.277 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.197/26] IPv6=[] ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" HandleID="k8s-pod-network.4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.354762 containerd[1593]: 2025-05-17 00:23:45.288 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"calico-apiserver-667c778c59-56rjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali070110cdb24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:45.354762 containerd[1593]: 2025-05-17 00:23:45.290 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.197/32] ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.354762 containerd[1593]: 2025-05-17 00:23:45.290 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali070110cdb24 ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.354762 containerd[1593]: 2025-05-17 00:23:45.302 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.354762 containerd[1593]: 2025-05-17 00:23:45.303 
[INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a", Pod:"calico-apiserver-667c778c59-56rjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali070110cdb24", MAC:"e2:35:89:b7:7d:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:45.354762 containerd[1593]: 2025-05-17 00:23:45.340 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-56rjm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:45.413082 systemd-networkd[1220]: calid23b73b65fd: Gained IPv6LL May 17 00:23:45.421684 containerd[1593]: time="2025-05-17T00:23:45.415948877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:45.422675 containerd[1593]: time="2025-05-17T00:23:45.422105374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:45.422675 containerd[1593]: time="2025-05-17T00:23:45.422176758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:45.429447 containerd[1593]: time="2025-05-17T00:23:45.428958401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:45.457980 systemd-networkd[1220]: califc84555782a: Link UP May 17 00:23:45.469692 systemd-networkd[1220]: califc84555782a: Gained carrier May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.141 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0 coredns-7c65d6cfc9- kube-system 908046ab-b728-4b75-9998-2e33cadd94e3 970 0 2025-05-17 00:23:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f coredns-7c65d6cfc9-rlk2g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc84555782a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.141 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.269 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" HandleID="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.271 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" HandleID="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003279f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"coredns-7c65d6cfc9-rlk2g", "timestamp":"2025-05-17 00:23:45.269072108 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.271 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.278 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.279 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.319 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.362 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.376 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.382 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.388 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.388 [INFO][4612] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.393 [INFO][4612] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2 May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.404 [INFO][4612] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.427 [INFO][4612] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.198/26] block=192.168.24.192/26 handle="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.429 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.198/26] handle="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.429 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:45.503146 containerd[1593]: 2025-05-17 00:23:45.429 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.198/26] IPv6=[] ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" HandleID="k8s-pod-network.a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.508934 containerd[1593]: 2025-05-17 00:23:45.442 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"908046ab-b728-4b75-9998-2e33cadd94e3", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"coredns-7c65d6cfc9-rlk2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84555782a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:45.508934 containerd[1593]: 2025-05-17 00:23:45.443 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.198/32] ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.508934 containerd[1593]: 2025-05-17 00:23:45.443 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc84555782a ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.508934 containerd[1593]: 2025-05-17 00:23:45.465 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.508934 containerd[1593]: 2025-05-17 00:23:45.473 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"908046ab-b728-4b75-9998-2e33cadd94e3", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2", Pod:"coredns-7c65d6cfc9-rlk2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84555782a", MAC:"96:04:d7:90:a9:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:45.509172 containerd[1593]: 2025-05-17 00:23:45.489 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rlk2g" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:23:45.578180 systemd-networkd[1220]: cali2200b02dc30: Link UP May 17 00:23:45.582177 systemd-networkd[1220]: cali2200b02dc30: Gained carrier May 17 00:23:45.588452 containerd[1593]: time="2025-05-17T00:23:45.586873579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:45.588452 containerd[1593]: time="2025-05-17T00:23:45.586946305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:45.588452 containerd[1593]: time="2025-05-17T00:23:45.586976324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:45.588452 containerd[1593]: time="2025-05-17T00:23:45.587074618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.172 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0 coredns-7c65d6cfc9- kube-system c131439b-80e0-49bc-a36e-7509ece2f8e2 969 0 2025-05-17 00:23:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f coredns-7c65d6cfc9-bk5wd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2200b02dc30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.172 [INFO][4589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.292 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" HandleID="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.294 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" HandleID="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"coredns-7c65d6cfc9-bk5wd", "timestamp":"2025-05-17 00:23:45.292847964 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.295 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.429 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.429 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.447 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.466 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.480 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.485 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.498 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.498 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.504 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3 May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.518 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.534 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.199/26] block=192.168.24.192/26 handle="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.534 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.199/26] handle="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.537 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:45.644056 containerd[1593]: 2025-05-17 00:23:45.537 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.199/26] IPv6=[] ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" HandleID="k8s-pod-network.fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.652832 containerd[1593]: 2025-05-17 00:23:45.550 [INFO][4589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c131439b-80e0-49bc-a36e-7509ece2f8e2", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"coredns-7c65d6cfc9-bk5wd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2200b02dc30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:45.652832 containerd[1593]: 2025-05-17 00:23:45.553 [INFO][4589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.199/32] ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.652832 containerd[1593]: 2025-05-17 00:23:45.554 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2200b02dc30 ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.652832 containerd[1593]: 2025-05-17 00:23:45.583 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.652832 containerd[1593]: 2025-05-17 00:23:45.601 [INFO][4589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c131439b-80e0-49bc-a36e-7509ece2f8e2", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3", Pod:"coredns-7c65d6cfc9-bk5wd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2200b02dc30", MAC:"0a:d7:fa:c3:60:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:45.653063 containerd[1593]: 2025-05-17 00:23:45.625 [INFO][4589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5wd" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:45.699959 containerd[1593]: time="2025-05-17T00:23:45.699918244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-56rjm,Uid:f6d3fda2-c7fd-4936-b8bb-491f8f0ede83,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a\"" May 17 00:23:45.723065 containerd[1593]: time="2025-05-17T00:23:45.722559013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rlk2g,Uid:908046ab-b728-4b75-9998-2e33cadd94e3,Namespace:kube-system,Attempt:1,} returns sandbox id \"a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2\"" May 17 00:23:45.724286 kubelet[2656]: E0517 00:23:45.723973 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:45.729291 containerd[1593]: time="2025-05-17T00:23:45.728871275Z" level=info msg="CreateContainer within sandbox \"a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:23:45.739539 containerd[1593]: time="2025-05-17T00:23:45.739272358Z" level=info msg="StopPodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\"" May 17 00:23:45.744296 containerd[1593]: time="2025-05-17T00:23:45.743995512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:45.744296 containerd[1593]: time="2025-05-17T00:23:45.744120301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:45.744296 containerd[1593]: time="2025-05-17T00:23:45.744137369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:45.746040 containerd[1593]: time="2025-05-17T00:23:45.745448777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:45.813015 containerd[1593]: time="2025-05-17T00:23:45.811725979Z" level=info msg="CreateContainer within sandbox \"a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36694a837462150b1f4a3c85d1277031e61e6641912d90f46c58d149a5978955\"" May 17 00:23:45.818480 containerd[1593]: time="2025-05-17T00:23:45.816526390Z" level=info msg="StartContainer for \"36694a837462150b1f4a3c85d1277031e61e6641912d90f46c58d149a5978955\"" May 17 00:23:45.882037 containerd[1593]: time="2025-05-17T00:23:45.881220570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5wd,Uid:c131439b-80e0-49bc-a36e-7509ece2f8e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3\"" May 17 00:23:45.894491 kubelet[2656]: E0517 00:23:45.888751 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:45.909596 containerd[1593]: time="2025-05-17T00:23:45.909542391Z" level=info msg="CreateContainer within sandbox \"fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:23:45.936881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179406486.mount: Deactivated successfully. 
May 17 00:23:45.947970 containerd[1593]: time="2025-05-17T00:23:45.947890442Z" level=info msg="CreateContainer within sandbox \"fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b8a69e2326d9ecd4cf64157a209e325a9956add9cf79908785914afab52d46b\"" May 17 00:23:45.950055 containerd[1593]: time="2025-05-17T00:23:45.949867486Z" level=info msg="StartContainer for \"2b8a69e2326d9ecd4cf64157a209e325a9956add9cf79908785914afab52d46b\"" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.829 [INFO][4778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.829 [INFO][4778] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" iface="eth0" netns="/var/run/netns/cni-5ab9560e-fef4-c1bc-ceda-6bcc98df16e6" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.830 [INFO][4778] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" iface="eth0" netns="/var/run/netns/cni-5ab9560e-fef4-c1bc-ceda-6bcc98df16e6" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.831 [INFO][4778] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" iface="eth0" netns="/var/run/netns/cni-5ab9560e-fef4-c1bc-ceda-6bcc98df16e6" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.831 [INFO][4778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.831 [INFO][4778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.970 [INFO][4797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.970 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.970 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.991 [WARNING][4797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.991 [INFO][4797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:45.995 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:46.006251 containerd[1593]: 2025-05-17 00:23:46.001 [INFO][4778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:46.007840 containerd[1593]: time="2025-05-17T00:23:46.007462376Z" level=info msg="TearDown network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" successfully" May 17 00:23:46.007840 containerd[1593]: time="2025-05-17T00:23:46.007497893Z" level=info msg="StopPodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" returns successfully" May 17 00:23:46.011340 containerd[1593]: time="2025-05-17T00:23:46.011297926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-kg2wm,Uid:1b6836be-a527-42a6-b488-c24f1f3b7b87,Namespace:calico-apiserver,Attempt:1,}" May 17 00:23:46.038634 containerd[1593]: time="2025-05-17T00:23:46.038564913Z" level=info msg="StartContainer for \"36694a837462150b1f4a3c85d1277031e61e6641912d90f46c58d149a5978955\" returns successfully" May 17 00:23:46.160624 containerd[1593]: time="2025-05-17T00:23:46.160055062Z" level=info msg="StartContainer for \"2b8a69e2326d9ecd4cf64157a209e325a9956add9cf79908785914afab52d46b\" returns successfully" May 17 00:23:46.235308 kubelet[2656]: E0517 00:23:46.234303 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:46.257834 kubelet[2656]: E0517 00:23:46.256894 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:46.273217 kubelet[2656]: I0517 00:23:46.271955 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bk5wd" podStartSLOduration=43.271934135 podStartE2EDuration="43.271934135s" podCreationTimestamp="2025-05-17 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:46.271586153 +0000 UTC m=+49.675423215" watchObservedRunningTime="2025-05-17 00:23:46.271934135 +0000 UTC m=+49.675771189" May 17 00:23:46.342653 kubelet[2656]: I0517 00:23:46.341991 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rlk2g" podStartSLOduration=43.341966137 podStartE2EDuration="43.341966137s" podCreationTimestamp="2025-05-17 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-17 00:23:46.341724919 +0000 UTC m=+49.745561979" watchObservedRunningTime="2025-05-17 00:23:46.341966137 +0000 UTC m=+49.745803188" May 17 00:23:46.444143 systemd-networkd[1220]: cali62bb398fadb: Link UP May 17 00:23:46.460729 systemd-networkd[1220]: cali62bb398fadb: Gained carrier May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.137 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0 calico-apiserver-667c778c59- calico-apiserver 1b6836be-a527-42a6-b488-c24f1f3b7b87 991 0 2025-05-17 00:23:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:667c778c59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-2d1cdc348f calico-apiserver-667c778c59-kg2wm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali62bb398fadb [] [] }} ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.139 [INFO][4843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.282 [INFO][4889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" HandleID="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.282 [INFO][4889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" HandleID="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003323d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-2d1cdc348f", "pod":"calico-apiserver-667c778c59-kg2wm", "timestamp":"2025-05-17 00:23:46.282125187 +0000 UTC"}, Hostname:"ci-4081.3.3-n-2d1cdc348f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.282 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.282 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.282 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-2d1cdc348f' May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.334 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.354 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.366 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.373 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.381 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.382 [INFO][4889] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.386 [INFO][4889] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030 May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.404 [INFO][4889] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.426 [INFO][4889] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.200/26] block=192.168.24.192/26 handle="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.426 [INFO][4889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.200/26] handle="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" host="ci-4081.3.3-n-2d1cdc348f" May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.427 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:46.509837 containerd[1593]: 2025-05-17 00:23:46.427 [INFO][4889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.200/26] IPv6=[] ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" HandleID="k8s-pod-network.d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.510921 containerd[1593]: 2025-05-17 00:23:46.431 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b6836be-a527-42a6-b488-c24f1f3b7b87", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"", Pod:"calico-apiserver-667c778c59-kg2wm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62bb398fadb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:46.510921 containerd[1593]: 2025-05-17 00:23:46.431 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.200/32] ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.510921 containerd[1593]: 2025-05-17 00:23:46.431 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62bb398fadb ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.510921 containerd[1593]: 2025-05-17 00:23:46.453 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.510921 containerd[1593]: 2025-05-17 00:23:46.462 
[INFO][4843] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b6836be-a527-42a6-b488-c24f1f3b7b87", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030", Pod:"calico-apiserver-667c778c59-kg2wm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62bb398fadb", MAC:"9a:26:03:a8:cc:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:46.510921 containerd[1593]: 2025-05-17 00:23:46.502 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030" Namespace="calico-apiserver" Pod="calico-apiserver-667c778c59-kg2wm" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:46.591028 containerd[1593]: time="2025-05-17T00:23:46.590511799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:46.591028 containerd[1593]: time="2025-05-17T00:23:46.590587478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:46.591028 containerd[1593]: time="2025-05-17T00:23:46.590602324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:46.591028 containerd[1593]: time="2025-05-17T00:23:46.590738634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:46.757711 systemd-networkd[1220]: cali070110cdb24: Gained IPv6LL May 17 00:23:46.778777 containerd[1593]: time="2025-05-17T00:23:46.775838762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667c778c59-kg2wm,Uid:1b6836be-a527-42a6-b488-c24f1f3b7b87,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030\"" May 17 00:23:46.886973 systemd[1]: run-netns-cni\x2d5ab9560e\x2dfef4\x2dc1bc\x2dceda\x2d6bcc98df16e6.mount: Deactivated successfully. May 17 00:23:46.950274 systemd-networkd[1220]: cali2200b02dc30: Gained IPv6LL May 17 00:23:47.267096 kubelet[2656]: E0517 00:23:47.267057 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:47.274940 systemd-networkd[1220]: califc84555782a: Gained IPv6LL May 17 00:23:47.277278 kubelet[2656]: E0517 00:23:47.275516 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:47.652605 systemd-networkd[1220]: cali62bb398fadb: Gained IPv6LL May 17 00:23:48.210144 containerd[1593]: time="2025-05-17T00:23:48.210085483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:48.211473 containerd[1593]: time="2025-05-17T00:23:48.211143930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:23:48.212174 containerd[1593]: time="2025-05-17T00:23:48.212138594Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:48.215218 containerd[1593]: time="2025-05-17T00:23:48.215169798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:48.216448 containerd[1593]: time="2025-05-17T00:23:48.216212157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.609126925s" May 17 00:23:48.216448 containerd[1593]: time="2025-05-17T00:23:48.216269648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:23:48.218626 containerd[1593]: time="2025-05-17T00:23:48.218106840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:23:48.245159 containerd[1593]: time="2025-05-17T00:23:48.245100251Z" level=info msg="CreateContainer within sandbox \"7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:23:48.259235 containerd[1593]: 
time="2025-05-17T00:23:48.258309152Z" level=info msg="CreateContainer within sandbox \"7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e507d86e3300ee59c3221bfe9444d2e89948ffde5ad4d09e6bb6c21a734a7305\"" May 17 00:23:48.262403 containerd[1593]: time="2025-05-17T00:23:48.262352026Z" level=info msg="StartContainer for \"e507d86e3300ee59c3221bfe9444d2e89948ffde5ad4d09e6bb6c21a734a7305\"" May 17 00:23:48.263308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732169293.mount: Deactivated successfully. May 17 00:23:48.277766 kubelet[2656]: E0517 00:23:48.277501 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:48.279307 kubelet[2656]: E0517 00:23:48.278329 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:48.402552 containerd[1593]: time="2025-05-17T00:23:48.401102026Z" level=info msg="StartContainer for \"e507d86e3300ee59c3221bfe9444d2e89948ffde5ad4d09e6bb6c21a734a7305\" returns successfully" May 17 00:23:48.420769 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:23:48.425214 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:23:48.420813 systemd-resolved[1477]: Flushed all caches. May 17 00:23:49.281465 kubelet[2656]: E0517 00:23:49.281393 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:49.285792 kubelet[2656]: E0517 00:23:49.284270 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:49.321113 kubelet[2656]: I0517 00:23:49.320465 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b44dc845b-2vb57" podStartSLOduration=25.308867281 podStartE2EDuration="29.320400618s" podCreationTimestamp="2025-05-17 00:23:20 +0000 UTC" firstStartedPulling="2025-05-17 00:23:44.206172187 +0000 UTC m=+47.610009225" lastFinishedPulling="2025-05-17 00:23:48.217705524 +0000 UTC m=+51.621542562" observedRunningTime="2025-05-17 00:23:49.319565898 +0000 UTC m=+52.723402953" watchObservedRunningTime="2025-05-17 00:23:49.320400618 +0000 UTC m=+52.724237678" May 17 00:23:49.687758 kubelet[2656]: I0517 00:23:49.686731 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:49.808816 containerd[1593]: time="2025-05-17T00:23:49.808756227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:49.813533 containerd[1593]: time="2025-05-17T00:23:49.813456691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:23:49.814332 containerd[1593]: time="2025-05-17T00:23:49.814261424Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:49.817880 
containerd[1593]: time="2025-05-17T00:23:49.817839752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:49.819633 containerd[1593]: time="2025-05-17T00:23:49.819546948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.601406879s" May 17 00:23:49.819885 containerd[1593]: time="2025-05-17T00:23:49.819774831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:23:49.822549 containerd[1593]: time="2025-05-17T00:23:49.822192184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:23:49.825209 containerd[1593]: time="2025-05-17T00:23:49.825082160Z" level=info msg="CreateContainer within sandbox \"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:23:49.847080 containerd[1593]: time="2025-05-17T00:23:49.846843953Z" level=info msg="CreateContainer within sandbox \"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"390374064710f66ead1d438c1d280239ad4c4adb7a3dac406bcbfa360a5956c5\"" May 17 00:23:49.847876 containerd[1593]: time="2025-05-17T00:23:49.847837253Z" level=info msg="StartContainer for \"390374064710f66ead1d438c1d280239ad4c4adb7a3dac406bcbfa360a5956c5\"" May 17 00:23:50.006913 containerd[1593]: time="2025-05-17T00:23:50.006751408Z" level=info msg="StartContainer for \"390374064710f66ead1d438c1d280239ad4c4adb7a3dac406bcbfa360a5956c5\" returns successfully" May 17 00:23:50.319046 kubelet[2656]: I0517 00:23:50.318716 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mfjj7" podStartSLOduration=23.73489976 podStartE2EDuration="30.318692023s" podCreationTimestamp="2025-05-17 00:23:20 +0000 UTC" firstStartedPulling="2025-05-17 00:23:43.23718815 +0000 UTC m=+46.641025191" lastFinishedPulling="2025-05-17 00:23:49.820980403 +0000 UTC m=+53.224817454" observedRunningTime="2025-05-17 00:23:50.315944609 +0000 UTC m=+53.719781669" watchObservedRunningTime="2025-05-17 00:23:50.318692023 +0000 UTC m=+53.722529082" May 17 00:23:51.509193 kubelet[2656]: I0517 00:23:51.508239 2656 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:23:51.536442 kubelet[2656]: I0517 00:23:51.534143 2656 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:23:52.459302 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:23:52.453517 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:23:52.453586 systemd-resolved[1477]: Flushed all caches. 
May 17 00:23:53.669291 containerd[1593]: time="2025-05-17T00:23:53.669154691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:53.671873 containerd[1593]: time="2025-05-17T00:23:53.671798726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:23:53.673016 containerd[1593]: time="2025-05-17T00:23:53.672962218Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:53.694609 containerd[1593]: time="2025-05-17T00:23:53.694558457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:53.696826 containerd[1593]: time="2025-05-17T00:23:53.695948692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.873707915s" May 17 00:23:53.696826 containerd[1593]: time="2025-05-17T00:23:53.696612207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:23:53.703149 containerd[1593]: time="2025-05-17T00:23:53.702792232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:23:53.707535 containerd[1593]: time="2025-05-17T00:23:53.707338623Z" level=info msg="CreateContainer within sandbox \"4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:23:53.783316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91460321.mount: Deactivated successfully. 
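containerd reports each pull's wall-clock cost as a trailing Go duration ("in 3.873707915s" above, "in 3.609126925s" and "in 1.601406879s" earlier). Those suffixes parse directly with time.ParseDuration, which makes it straightforward to extract timings from a captured log; a small sketch using durations quoted from this boot:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    func main() {
        // Trailing durations as containerd prints them in the
        // "Pulled image ... in <duration>" messages above.
        lines := []string{
            `Pulled image "ghcr.io/flatcar/calico/apiserver:v3.30.0" ... in 3.873707915s`,
            `Pulled image "ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0" ... in 1.601406879s`,
        }
        re := regexp.MustCompile(`in ([0-9.]+(?:ms|s))$`)
        for _, l := range lines {
            m := re.FindStringSubmatch(l)
            if m == nil {
                continue
            }
            d, err := time.ParseDuration(m[1])
            if err != nil {
                continue
            }
            fmt.Println(d) // 3.873707915s, then 1.601406879s
        }
    }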
May 17 00:23:53.823858 containerd[1593]: time="2025-05-17T00:23:53.822566876Z" level=info msg="CreateContainer within sandbox \"4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ab5d708593e8734ef98e21dc2914ee3e48cab9ccfcd9c31f565e7c0142b1228\"" May 17 00:23:53.828041 containerd[1593]: time="2025-05-17T00:23:53.827984526Z" level=info msg="StartContainer for \"0ab5d708593e8734ef98e21dc2914ee3e48cab9ccfcd9c31f565e7c0142b1228\"" May 17 00:23:54.039234 containerd[1593]: time="2025-05-17T00:23:54.039125457Z" level=info msg="StartContainer for \"0ab5d708593e8734ef98e21dc2914ee3e48cab9ccfcd9c31f565e7c0142b1228\" returns successfully" May 17 00:23:54.225713 containerd[1593]: time="2025-05-17T00:23:54.225485033Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:54.240594 containerd[1593]: time="2025-05-17T00:23:54.239536333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:23:54.252640 containerd[1593]: time="2025-05-17T00:23:54.252452881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 549.571463ms" May 17 00:23:54.252903 containerd[1593]: time="2025-05-17T00:23:54.252880823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:23:54.255246 containerd[1593]: time="2025-05-17T00:23:54.255121873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:23:54.259167 containerd[1593]: time="2025-05-17T00:23:54.259131268Z" level=info msg="CreateContainer within sandbox \"d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:23:54.356147 containerd[1593]: time="2025-05-17T00:23:54.355643849Z" level=info msg="CreateContainer within sandbox \"d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e775fda86c812f7b62b59a04d896debc25218d4df1d31b07373a7891e871639\"" May 17 00:23:54.358976 containerd[1593]: time="2025-05-17T00:23:54.357548867Z" level=info msg="StartContainer for \"8e775fda86c812f7b62b59a04d896debc25218d4df1d31b07373a7891e871639\"" May 17 00:23:54.376312 kubelet[2656]: I0517 00:23:54.376248 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-667c778c59-56rjm" podStartSLOduration=31.38485319 podStartE2EDuration="39.376219215s" podCreationTimestamp="2025-05-17 00:23:15 +0000 UTC" firstStartedPulling="2025-05-17 00:23:45.709097841 +0000 UTC m=+49.112934879" lastFinishedPulling="2025-05-17 00:23:53.70046385 +0000 UTC m=+57.104300904" observedRunningTime="2025-05-17 00:23:54.37372335 +0000 UTC m=+57.777560412" watchObservedRunningTime="2025-05-17 00:23:54.376219215 +0000 UTC m=+57.780056275" May 17 00:23:54.506884 systemd-journald[1132]: Under memory pressure, flushing caches. 
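The second apiserver pull above completes in 549.571463ms with only 77 bytes read: the layers were already on disk from the first pull, so effectively only the manifest check went over the wire. The pod_startup_latency_tracker line uses Go's default time format plus a monotonic " m=+..." suffix; stripping that suffix lets the reported durations be recomputed by hand. A sketch using the 56rjm pod's values from the entry above (the few-millisecond gap against kubelet's 39.376219215s figure is presumably because kubelet differences the monotonic readings rather than wall-clock times):

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // parseKubeTime parses timestamps as kubelet prints them above,
    // dropping the " m=+..." monotonic-clock suffix first.
    func parseKubeTime(s string) time.Time {
        if i := strings.Index(s, " m=+"); i >= 0 {
            s = s[:i]
        }
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := parseKubeTime("2025-05-17 00:23:15 +0000 UTC")
        running := parseKubeTime("2025-05-17 00:23:54.37372335 +0000 UTC m=+57.777560412")
        fmt.Println("podStartE2EDuration ~", running.Sub(created)) // ~39.374s
    }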
May 17 00:23:54.504497 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:23:54.504534 systemd-resolved[1477]: Flushed all caches. May 17 00:23:54.509828 containerd[1593]: time="2025-05-17T00:23:54.506695528Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:54.510325 containerd[1593]: time="2025-05-17T00:23:54.510262202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:23:54.516146 containerd[1593]: time="2025-05-17T00:23:54.515986243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:54.522997 kubelet[2656]: E0517 00:23:54.516446 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:54.526014 kubelet[2656]: E0517 00:23:54.525886 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:54.556722 kubelet[2656]: E0517 00:23:54.556639 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dee365f02e9f4c97935324a2c6e9b0b6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d89xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-88b996598-x9bfz_calico-system(b963ab05-965a-4613-9925-a8179bee8a6a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:54.559696 containerd[1593]: time="2025-05-17T00:23:54.559202779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:23:54.651508 containerd[1593]: time="2025-05-17T00:23:54.649215270Z" level=info msg="StartContainer for \"8e775fda86c812f7b62b59a04d896debc25218d4df1d31b07373a7891e871639\" returns successfully" May 17 00:23:54.796765 containerd[1593]: time="2025-05-17T00:23:54.794665080Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:54.805613 containerd[1593]: time="2025-05-17T00:23:54.805538534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:54.807376 containerd[1593]: time="2025-05-17T00:23:54.807278220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:23:54.814719 kubelet[2656]: E0517 00:23:54.807813 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:54.814719 kubelet[2656]: E0517 00:23:54.812328 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:54.814719 kubelet[2656]: E0517 00:23:54.812498 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d89xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-88b996598-x9bfz_calico-system(b963ab05-965a-4613-9925-a8179bee8a6a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:54.819850 kubelet[2656]: E0517 00:23:54.819646 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:23:55.361266 kubelet[2656]: I0517 00:23:55.360930 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:55.378870 kubelet[2656]: I0517 00:23:55.378780 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-667c778c59-kg2wm" podStartSLOduration=32.911111815 podStartE2EDuration="40.378759496s" podCreationTimestamp="2025-05-17 00:23:15 +0000 UTC" firstStartedPulling="2025-05-17 00:23:46.786736948 +0000 UTC m=+50.190574001" lastFinishedPulling="2025-05-17 00:23:54.254384622 +0000 UTC m=+57.658221682" observedRunningTime="2025-05-17 00:23:55.376899455 +0000 UTC m=+58.780736514" watchObservedRunningTime="2025-05-17 00:23:55.378759496 +0000 UTC m=+58.782596555" May 17 00:23:56.365714 kubelet[2656]: I0517 00:23:56.365239 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:56.958144 containerd[1593]: time="2025-05-17T00:23:56.956851514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:23:57.071073 kubelet[2656]: I0517 00:23:57.069936 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:57.115859 containerd[1593]: time="2025-05-17T00:23:57.114538258Z" level=info msg="StopPodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\"" May 17 00:23:57.248207 containerd[1593]: time="2025-05-17T00:23:57.247971634Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:57.249396 containerd[1593]: time="2025-05-17T00:23:57.249168482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:57.249396 containerd[1593]: time="2025-05-17T00:23:57.249231729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:23:57.251504 kubelet[2656]: E0517 00:23:57.250605 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:57.251504 kubelet[2656]: E0517 00:23:57.250687 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:57.251504 kubelet[2656]: E0517 00:23:57.250870 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cz82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-84rn9_calico-system(7b005f59-212f-4f5e-ba82-e64c93f912f7): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:57.256020 kubelet[2656]: E0517 00:23:57.254039 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:23:57.797023 systemd[1]: Started sshd@7-134.199.214.88:22-139.178.68.195:36810.service - OpenSSH per-connection server daemon (139.178.68.195:36810). May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:57.445 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a", Pod:"calico-apiserver-667c778c59-56rjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali070110cdb24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:57.462 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:57.462 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" iface="eth0" netns="" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:57.462 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:57.462 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.020 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.038 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.044 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.096 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.096 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.103 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:58.129651 containerd[1593]: 2025-05-17 00:23:58.116 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.129651 containerd[1593]: time="2025-05-17T00:23:58.127705303Z" level=info msg="TearDown network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" successfully" May 17 00:23:58.129651 containerd[1593]: time="2025-05-17T00:23:58.127957488Z" level=info msg="StopPodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" returns successfully" May 17 00:23:58.149895 sshd[5247]: Accepted publickey for core from 139.178.68.195 port 36810 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:58.152934 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:58.194072 systemd-logind[1555]: New session 8 of user core. May 17 00:23:58.202013 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:23:58.264135 containerd[1593]: time="2025-05-17T00:23:58.264005277Z" level=info msg="RemovePodSandbox for \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\"" May 17 00:23:58.267386 containerd[1593]: time="2025-05-17T00:23:58.267248368Z" level=info msg="Forcibly stopping sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\"" May 17 00:23:58.483793 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:23:58.469272 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:23:58.469309 systemd-resolved[1477]: Flushed all caches. May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.376 [WARNING][5261] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d3fda2-c7fd-4936-b8bb-491f8f0ede83", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"4444116d7f343a7a249cff079e5bfd7480b02a7d681ac544324aa37fc7d4050a", Pod:"calico-apiserver-667c778c59-56rjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali070110cdb24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.377 [INFO][5261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.377 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" iface="eth0" netns="" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.377 [INFO][5261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.377 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.548 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.552 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.552 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.598 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.598 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" HandleID="k8s-pod-network.8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--56rjm-eth0" May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.619 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:58.637656 containerd[1593]: 2025-05-17 00:23:58.632 [INFO][5261] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce" May 17 00:23:58.658743 containerd[1593]: time="2025-05-17T00:23:58.640574958Z" level=info msg="TearDown network for sandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" successfully" May 17 00:23:58.722512 containerd[1593]: time="2025-05-17T00:23:58.721494954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:23:58.746266 containerd[1593]: time="2025-05-17T00:23:58.738657555Z" level=info msg="RemovePodSandbox \"8939c29a7fd2d6429c8d598417bcfad195ab56a4b53b7668bfa951a6ca75c4ce\" returns successfully" May 17 00:23:58.793080 containerd[1593]: time="2025-05-17T00:23:58.793019241Z" level=info msg="StopPodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\"" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.001 [WARNING][5289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c131439b-80e0-49bc-a36e-7509ece2f8e2", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3", Pod:"coredns-7c65d6cfc9-bk5wd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2200b02dc30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.001 [INFO][5289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.001 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" iface="eth0" netns="" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.001 [INFO][5289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.001 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.107 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.107 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.107 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.124 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.124 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.129 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:59.149378 containerd[1593]: 2025-05-17 00:23:59.139 [INFO][5289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.149378 containerd[1593]: time="2025-05-17T00:23:59.148863539Z" level=info msg="TearDown network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" successfully" May 17 00:23:59.149378 containerd[1593]: time="2025-05-17T00:23:59.148899346Z" level=info msg="StopPodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" returns successfully" May 17 00:23:59.158809 containerd[1593]: time="2025-05-17T00:23:59.149479227Z" level=info msg="RemovePodSandbox for \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\"" May 17 00:23:59.158809 containerd[1593]: time="2025-05-17T00:23:59.149510864Z" level=info msg="Forcibly stopping sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\"" May 17 00:23:59.378011 sshd[5247]: pam_unix(sshd:session): session closed for user core May 17 00:23:59.405524 systemd[1]: sshd@7-134.199.214.88:22-139.178.68.195:36810.service: Deactivated successfully. May 17 00:23:59.409942 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. May 17 00:23:59.415407 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:23:59.437088 systemd-logind[1555]: Removed session 8. May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.389 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c131439b-80e0-49bc-a36e-7509ece2f8e2", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"fc988a49b18a75c14a6c7e7ebe4055b1c07612e1f74a66c62bdd49d2900d14f3", Pod:"coredns-7c65d6cfc9-bk5wd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2200b02dc30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.389 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.389 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" iface="eth0" netns="" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.389 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.403 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.494 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.495 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.495 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.522 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.523 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" HandleID="k8s-pod-network.7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--bk5wd-eth0" May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.545 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:59.568182 containerd[1593]: 2025-05-17 00:23:59.556 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a" May 17 00:23:59.570201 containerd[1593]: time="2025-05-17T00:23:59.568227811Z" level=info msg="TearDown network for sandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" successfully" May 17 00:23:59.577134 containerd[1593]: time="2025-05-17T00:23:59.576807513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:23:59.577134 containerd[1593]: time="2025-05-17T00:23:59.576893291Z" level=info msg="RemovePodSandbox \"7ca5c76ff5af576c9f7a0beefb7c082cc9c36d67da7dcdcd96e7e19d372a2a8a\" returns successfully" May 17 00:23:59.578618 containerd[1593]: time="2025-05-17T00:23:59.578130907Z" level=info msg="StopPodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\"" May 17 00:23:59.760449 systemd[1]: run-containerd-runc-k8s.io-e507d86e3300ee59c3221bfe9444d2e89948ffde5ad4d09e6bb6c21a734a7305-runc.pQCgbJ.mount: Deactivated successfully. May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.724 [WARNING][5338] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b6836be-a527-42a6-b488-c24f1f3b7b87", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030", Pod:"calico-apiserver-667c778c59-kg2wm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62bb398fadb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.724 [INFO][5338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.724 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" iface="eth0" netns="" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.724 [INFO][5338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.724 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.804 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.805 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.805 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.827 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.827 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.832 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:59.846038 containerd[1593]: 2025-05-17 00:23:59.843 [INFO][5338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:23:59.849168 containerd[1593]: time="2025-05-17T00:23:59.846104378Z" level=info msg="TearDown network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" successfully" May 17 00:23:59.849168 containerd[1593]: time="2025-05-17T00:23:59.846139961Z" level=info msg="StopPodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" returns successfully" May 17 00:23:59.849168 containerd[1593]: time="2025-05-17T00:23:59.848052784Z" level=info msg="RemovePodSandbox for \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\"" May 17 00:23:59.849168 containerd[1593]: time="2025-05-17T00:23:59.848085284Z" level=info msg="Forcibly stopping sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\"" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.934 [WARNING][5376] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0", GenerateName:"calico-apiserver-667c778c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b6836be-a527-42a6-b488-c24f1f3b7b87", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667c778c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"d2e459fe7d5101dc3c3bf61a6e331da8dab24ebea81467ab5a075d2578625030", Pod:"calico-apiserver-667c778c59-kg2wm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62bb398fadb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.935 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.935 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" iface="eth0" netns="" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.935 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.935 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.981 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.982 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.982 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.993 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.993 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" HandleID="k8s-pod-network.b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--apiserver--667c778c59--kg2wm-eth0" May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:23:59.996 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:00.016808 containerd[1593]: 2025-05-17 00:24:00.012 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f" May 17 00:24:00.016808 containerd[1593]: time="2025-05-17T00:24:00.016738150Z" level=info msg="TearDown network for sandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" successfully" May 17 00:24:00.029715 containerd[1593]: time="2025-05-17T00:24:00.029646625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:00.029935 containerd[1593]: time="2025-05-17T00:24:00.029801701Z" level=info msg="RemovePodSandbox \"b099c164d17ff40c0212fe844211da11b2365094938ac6412348818145b7091f\" returns successfully" May 17 00:24:00.032511 containerd[1593]: time="2025-05-17T00:24:00.032024873Z" level=info msg="StopPodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\"" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.140 [WARNING][5400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0", GenerateName:"calico-kube-controllers-6b44dc845b-", Namespace:"calico-system", SelfLink:"", UID:"ec01702b-c063-4f68-ba46-afbe1753b0e5", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b44dc845b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2", Pod:"calico-kube-controllers-6b44dc845b-2vb57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid23b73b65fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.141 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.141 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" iface="eth0" netns="" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.141 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.141 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.242 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.244 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.244 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.256 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.256 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.261 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:00.274380 containerd[1593]: 2025-05-17 00:24:00.271 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.276724 containerd[1593]: time="2025-05-17T00:24:00.274590879Z" level=info msg="TearDown network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" successfully" May 17 00:24:00.276724 containerd[1593]: time="2025-05-17T00:24:00.274620723Z" level=info msg="StopPodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" returns successfully" May 17 00:24:00.276867 containerd[1593]: time="2025-05-17T00:24:00.276801350Z" level=info msg="RemovePodSandbox for \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\"" May 17 00:24:00.276867 containerd[1593]: time="2025-05-17T00:24:00.276845094Z" level=info msg="Forcibly stopping sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\"" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.336 [WARNING][5423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0", GenerateName:"calico-kube-controllers-6b44dc845b-", Namespace:"calico-system", SelfLink:"", UID:"ec01702b-c063-4f68-ba46-afbe1753b0e5", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b44dc845b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"7aa53d9dd9751427b657e9795949646942b11cbdf6f951e95b53b5de72afaba2", Pod:"calico-kube-controllers-6b44dc845b-2vb57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid23b73b65fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.336 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.336 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" iface="eth0" netns="" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.336 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.336 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.394 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.394 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.394 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.406 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.407 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" HandleID="k8s-pod-network.761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-calico--kube--controllers--6b44dc845b--2vb57-eth0" May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.409 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:00.414348 containerd[1593]: 2025-05-17 00:24:00.411 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df" May 17 00:24:00.416637 containerd[1593]: time="2025-05-17T00:24:00.414570500Z" level=info msg="TearDown network for sandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" successfully" May 17 00:24:00.451160 containerd[1593]: time="2025-05-17T00:24:00.450293435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:00.451760 containerd[1593]: time="2025-05-17T00:24:00.451521368Z" level=info msg="RemovePodSandbox \"761f2d69ccba17cdcc3ca1459620bd72f28fc8811c20aea7015f968e487609df\" returns successfully" May 17 00:24:00.452622 containerd[1593]: time="2025-05-17T00:24:00.452139157Z" level=info msg="StopPodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\"" May 17 00:24:00.520134 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:00.520404 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:00.520442 systemd-resolved[1477]: Flushed all caches. May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.541 [WARNING][5446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"7b005f59-212f-4f5e-ba82-e64c93f912f7", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f", Pod:"goldmane-8f77d7b6c-84rn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20e7cbbbd4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.542 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.542 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" iface="eth0" netns="" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.542 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.542 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.605 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.611 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.611 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.622 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.622 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.625 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:00.635978 containerd[1593]: 2025-05-17 00:24:00.631 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.635978 containerd[1593]: time="2025-05-17T00:24:00.635838292Z" level=info msg="TearDown network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" successfully" May 17 00:24:00.639793 containerd[1593]: time="2025-05-17T00:24:00.635870592Z" level=info msg="StopPodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" returns successfully" May 17 00:24:00.640631 containerd[1593]: time="2025-05-17T00:24:00.640152787Z" level=info msg="RemovePodSandbox for \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\"" May 17 00:24:00.640631 containerd[1593]: time="2025-05-17T00:24:00.640202196Z" level=info msg="Forcibly stopping sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\"" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.823 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"7b005f59-212f-4f5e-ba82-e64c93f912f7", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"40792ae43f460806b3689d21cae944be1e1cc430f85f4d5c8e09ef9ee49ce09f", Pod:"goldmane-8f77d7b6c-84rn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20e7cbbbd4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.823 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.825 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" iface="eth0" netns="" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.825 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.825 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.898 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.898 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.898 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.908 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.908 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" HandleID="k8s-pod-network.cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-goldmane--8f77d7b6c--84rn9-eth0" May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.912 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:00.924566 containerd[1593]: 2025-05-17 00:24:00.917 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f" May 17 00:24:00.924566 containerd[1593]: time="2025-05-17T00:24:00.921799840Z" level=info msg="TearDown network for sandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" successfully" May 17 00:24:00.949436 containerd[1593]: time="2025-05-17T00:24:00.939647387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:00.951514 containerd[1593]: time="2025-05-17T00:24:00.950263299Z" level=info msg="RemovePodSandbox \"cd5a28e348c0c7024b0a49874ad5218da5b5a118b50d922937fc952613f1e88f\" returns successfully" May 17 00:24:00.954175 containerd[1593]: time="2025-05-17T00:24:00.954127131Z" level=info msg="StopPodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\"" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.093 [WARNING][5494] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.093 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.093 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" iface="eth0" netns="" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.093 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.093 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.256 [INFO][5501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.259 [INFO][5501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.259 [INFO][5501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.298 [WARNING][5501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.298 [INFO][5501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.327 [INFO][5501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:01.342142 containerd[1593]: 2025-05-17 00:24:01.335 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.343400 containerd[1593]: time="2025-05-17T00:24:01.343220302Z" level=info msg="TearDown network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" successfully" May 17 00:24:01.343400 containerd[1593]: time="2025-05-17T00:24:01.343260285Z" level=info msg="StopPodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" returns successfully" May 17 00:24:01.344536 containerd[1593]: time="2025-05-17T00:24:01.343921851Z" level=info msg="RemovePodSandbox for \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\"" May 17 00:24:01.344536 containerd[1593]: time="2025-05-17T00:24:01.343965124Z" level=info msg="Forcibly stopping sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\"" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.424 [WARNING][5516] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" WorkloadEndpoint="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.424 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.424 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" iface="eth0" netns="" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.424 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.424 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.469 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.470 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.470 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.492 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.492 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" HandleID="k8s-pod-network.e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-whisker--85ddd9d5d8--dbs7j-eth0" May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.499 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:01.521449 containerd[1593]: 2025-05-17 00:24:01.514 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8" May 17 00:24:01.523577 containerd[1593]: time="2025-05-17T00:24:01.523530447Z" level=info msg="TearDown network for sandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" successfully" May 17 00:24:01.533614 containerd[1593]: time="2025-05-17T00:24:01.533561409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:01.533748 containerd[1593]: time="2025-05-17T00:24:01.533650022Z" level=info msg="RemovePodSandbox \"e1675506dc1b3a0c8fe5e6db0c01c723d66f5c4b4e43cbcb5fb0739d55ee06a8\" returns successfully" May 17 00:24:01.536103 containerd[1593]: time="2025-05-17T00:24:01.534547067Z" level=info msg="StopPodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\"" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.680 [WARNING][5539] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"908046ab-b728-4b75-9998-2e33cadd94e3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2", Pod:"coredns-7c65d6cfc9-rlk2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84555782a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.686 [INFO][5539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.692 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" iface="eth0" netns="" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.692 [INFO][5539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.692 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.780 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.782 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.782 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.801 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.801 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.805 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:01.817910 containerd[1593]: 2025-05-17 00:24:01.811 [INFO][5539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:01.819470 containerd[1593]: time="2025-05-17T00:24:01.818504539Z" level=info msg="TearDown network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" successfully" May 17 00:24:01.819470 containerd[1593]: time="2025-05-17T00:24:01.818568023Z" level=info msg="StopPodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" returns successfully" May 17 00:24:01.835459 containerd[1593]: time="2025-05-17T00:24:01.835320237Z" level=info msg="RemovePodSandbox for \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\"" May 17 00:24:01.835459 containerd[1593]: time="2025-05-17T00:24:01.835387512Z" level=info msg="Forcibly stopping sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\"" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:01.960 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"908046ab-b728-4b75-9998-2e33cadd94e3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"a90900dc7b7a405b150d01bbc54590745f1cefc84f0cdc52846b96cfa3addef2", Pod:"coredns-7c65d6cfc9-rlk2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84555782a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:01.961 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:01.961 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" iface="eth0" netns="" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:01.961 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:01.962 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.109 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.110 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.113 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.148 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.149 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" HandleID="k8s-pod-network.cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-coredns--7c65d6cfc9--rlk2g-eth0" May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.159 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:02.191023 containerd[1593]: 2025-05-17 00:24:02.172 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721" May 17 00:24:02.196486 containerd[1593]: time="2025-05-17T00:24:02.194174227Z" level=info msg="TearDown network for sandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" successfully" May 17 00:24:02.205482 containerd[1593]: time="2025-05-17T00:24:02.204702000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:02.205482 containerd[1593]: time="2025-05-17T00:24:02.204814769Z" level=info msg="RemovePodSandbox \"cc446e1e78f5fff9c2b3d3cf45782b0926185d8676df320c580c37999e5fc721\" returns successfully" May 17 00:24:02.304692 containerd[1593]: time="2025-05-17T00:24:02.302212978Z" level=info msg="StopPodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\"" May 17 00:24:02.567714 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:02.566561 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:02.566581 systemd-resolved[1477]: Flushed all caches. May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.446 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7aa0df5-b560-4539-8078-1b99b64b6387", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e", Pod:"csi-node-driver-mfjj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ed04338236", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.446 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.447 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" iface="eth0" netns="" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.447 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.447 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.526 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.526 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.527 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.571 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.571 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.576 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:02.588075 containerd[1593]: 2025-05-17 00:24:02.582 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.593036 containerd[1593]: time="2025-05-17T00:24:02.588147202Z" level=info msg="TearDown network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" successfully" May 17 00:24:02.593036 containerd[1593]: time="2025-05-17T00:24:02.588180464Z" level=info msg="StopPodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" returns successfully" May 17 00:24:02.593036 containerd[1593]: time="2025-05-17T00:24:02.589525105Z" level=info msg="RemovePodSandbox for \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\"" May 17 00:24:02.593036 containerd[1593]: time="2025-05-17T00:24:02.589567134Z" level=info msg="Forcibly stopping sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\"" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.710 [WARNING][5606] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7aa0df5-b560-4539-8078-1b99b64b6387", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-2d1cdc348f", ContainerID:"c82612c88b2fe7619fd9d1ef29e43e5e8d56fc4a5fc21dd4757ad5daffb1f37e", Pod:"csi-node-driver-mfjj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ed04338236", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.710 [INFO][5606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.710 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" iface="eth0" netns="" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.711 [INFO][5606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.711 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.858 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.859 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.859 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.867 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.869 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" HandleID="k8s-pod-network.e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" Workload="ci--4081.3.3--n--2d1cdc348f-k8s-csi--node--driver--mfjj7-eth0" May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.874 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:02.894918 containerd[1593]: 2025-05-17 00:24:02.885 [INFO][5606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880" May 17 00:24:02.894918 containerd[1593]: time="2025-05-17T00:24:02.892646420Z" level=info msg="TearDown network for sandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" successfully" May 17 00:24:02.897472 containerd[1593]: time="2025-05-17T00:24:02.896887486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:02.897472 containerd[1593]: time="2025-05-17T00:24:02.896972978Z" level=info msg="RemovePodSandbox \"e7176534fe241a32b1e177585483e0bef201047b2c90614b87561cff3d5bc880\" returns successfully" May 17 00:24:04.393788 systemd[1]: Started sshd@8-134.199.214.88:22-139.178.68.195:42154.service - OpenSSH per-connection server daemon (139.178.68.195:42154). May 17 00:24:04.554091 sshd[5621]: Accepted publickey for core from 139.178.68.195 port 42154 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:04.558454 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:04.576330 systemd-logind[1555]: New session 9 of user core. May 17 00:24:04.582815 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:24:05.067995 sshd[5621]: pam_unix(sshd:session): session closed for user core May 17 00:24:05.075543 systemd[1]: sshd@8-134.199.214.88:22-139.178.68.195:42154.service: Deactivated successfully. May 17 00:24:05.082657 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:24:05.082898 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. May 17 00:24:05.088118 systemd-logind[1555]: Removed session 9. May 17 00:24:09.871445 kubelet[2656]: E0517 00:24:09.863660 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:24:10.086808 systemd[1]: Started sshd@9-134.199.214.88:22-139.178.68.195:42160.service - OpenSSH per-connection server daemon (139.178.68.195:42160). 
May 17 00:24:10.232580 sshd[5638]: Accepted publickey for core from 139.178.68.195 port 42160 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:10.235752 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:10.248103 systemd-logind[1555]: New session 10 of user core. May 17 00:24:10.253831 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:24:10.519298 sshd[5638]: pam_unix(sshd:session): session closed for user core May 17 00:24:10.531956 systemd[1]: Started sshd@10-134.199.214.88:22-139.178.68.195:42162.service - OpenSSH per-connection server daemon (139.178.68.195:42162). May 17 00:24:10.532513 systemd[1]: sshd@9-134.199.214.88:22-139.178.68.195:42160.service: Deactivated successfully. May 17 00:24:10.539497 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. May 17 00:24:10.540766 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:24:10.542707 systemd-logind[1555]: Removed session 10. May 17 00:24:10.635985 sshd[5650]: Accepted publickey for core from 139.178.68.195 port 42162 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:10.637942 sshd[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:10.649182 systemd-logind[1555]: New session 11 of user core. May 17 00:24:10.653843 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:24:10.977394 sshd[5650]: pam_unix(sshd:session): session closed for user core May 17 00:24:10.988968 systemd[1]: Started sshd@11-134.199.214.88:22-139.178.68.195:42172.service - OpenSSH per-connection server daemon (139.178.68.195:42172). May 17 00:24:10.997090 systemd[1]: sshd@10-134.199.214.88:22-139.178.68.195:42162.service: Deactivated successfully. May 17 00:24:11.007481 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:24:11.007764 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. May 17 00:24:11.019885 systemd-logind[1555]: Removed session 11. May 17 00:24:11.074532 sshd[5662]: Accepted publickey for core from 139.178.68.195 port 42172 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:11.076953 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:11.096265 systemd-logind[1555]: New session 12 of user core. May 17 00:24:11.102620 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:24:11.485978 sshd[5662]: pam_unix(sshd:session): session closed for user core May 17 00:24:11.496698 systemd[1]: sshd@11-134.199.214.88:22-139.178.68.195:42172.service: Deactivated successfully. May 17 00:24:11.505165 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. May 17 00:24:11.505357 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:24:11.508674 systemd-logind[1555]: Removed session 12. May 17 00:24:11.758309 kubelet[2656]: E0517 00:24:11.758115 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:24:12.421686 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:12.428117 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:12.421698 systemd-resolved[1477]: Flushed all caches. 
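
The interleaved systemd-journald and systemd-resolved "Under memory pressure, flushing caches" entries come from systemd's PSI-based pressure handling: each service watches a memory-pressure file (its cgroup's memory.pressure, or the system-wide /proc/pressure/memory) and drops caches once stall time crosses its threshold, which is why the two daemons react within microseconds of each other throughout this log. A minimal sketch reading the system-wide PSI file that this mechanism builds on:

package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Requires a kernel built with CONFIG_PSI, as the Flatcar kernel above is.
	data, err := os.ReadFile("/proc/pressure/memory")
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		fields := strings.Fields(line) // e.g. "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345"
		if len(fields) == 0 {
			continue
		}
		for _, f := range fields[1:] {
			if v, ok := strings.CutPrefix(f, "avg10="); ok {
				pct, _ := strconv.ParseFloat(v, 64)
				fmt.Printf("memory pressure (%s, 10s avg): %.2f%%\n", fields[0], pct)
			}
		}
	}
}
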
May 17 00:24:16.506509 systemd[1]: Started sshd@12-134.199.214.88:22-139.178.68.195:52460.service - OpenSSH per-connection server daemon (139.178.68.195:52460). May 17 00:24:16.623664 sshd[5680]: Accepted publickey for core from 139.178.68.195 port 52460 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:16.623494 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:16.632672 systemd-logind[1555]: New session 13 of user core. May 17 00:24:16.638066 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:24:17.131792 sshd[5680]: pam_unix(sshd:session): session closed for user core May 17 00:24:17.143655 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. May 17 00:24:17.144870 systemd[1]: sshd@12-134.199.214.88:22-139.178.68.195:52460.service: Deactivated successfully. May 17 00:24:17.152645 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:24:17.155148 systemd-logind[1555]: Removed session 13. May 17 00:24:18.436596 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:18.436639 systemd-resolved[1477]: Flushed all caches. May 17 00:24:18.441697 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:19.747269 kubelet[2656]: E0517 00:24:19.747209 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:19.752993 kubelet[2656]: E0517 00:24:19.748272 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:21.800382 containerd[1593]: time="2025-05-17T00:24:21.800304476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:22.139457 systemd[1]: Started sshd@13-134.199.214.88:22-139.178.68.195:52474.service - OpenSSH per-connection server daemon (139.178.68.195:52474). 
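
Each "Accepted publickey … RSA SHA256:TM7Vm5…" entry identifies the client key by the unpadded base64 SHA-256 digest of its wire-encoded public key, which is why the same string recurs across sessions 9 through 13 and beyond. A minimal sketch, assuming the golang.org/x/crypto/ssh module, that prints that fingerprint form for a throwaway key generated in-process:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key, generated only to demonstrate the format; parse the
	// user's real authorized_keys entry with ssh.ParseAuthorizedKey instead.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Same "SHA256:<unpadded base64>" form sshd logs on "Accepted publickey".
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}

Feeding it the actual authorized_keys entry for core would reproduce the fingerprint logged above and confirm which key is opening these sessions.
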
May 17 00:24:22.142398 containerd[1593]: time="2025-05-17T00:24:22.142276526Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:22.144357 containerd[1593]: time="2025-05-17T00:24:22.143951421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:22.160951 kubelet[2656]: E0517 00:24:22.160867 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:22.171940 kubelet[2656]: E0517 00:24:22.161406 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:22.210578 containerd[1593]: time="2025-05-17T00:24:22.144089841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:22.223314 kubelet[2656]: E0517 00:24:22.223134 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dee365f02e9f4c97935324a2c6e9b0b6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d89xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-88b996598-x9bfz_calico-system(b963ab05-965a-4613-9925-a8179bee8a6a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:22.230738 containerd[1593]: time="2025-05-17T00:24:22.230323261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:22.308323 sshd[5748]: Accepted publickey for core from 139.178.68.195 port 52474 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:22.312854 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:22.331518 systemd-logind[1555]: New session 14 of user core. May 17 00:24:22.337163 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 17 00:24:22.460367 containerd[1593]: time="2025-05-17T00:24:22.460004436Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:22.463378 containerd[1593]: time="2025-05-17T00:24:22.463136866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:22.464395 containerd[1593]: time="2025-05-17T00:24:22.463707423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:22.464728 kubelet[2656]: E0517 00:24:22.463975 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:22.464728 kubelet[2656]: E0517 00:24:22.464080 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:22.464728 kubelet[2656]: E0517 00:24:22.464291 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d89xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-88b996598-x9bfz_calico-system(b963ab05-965a-4613-9925-a8179bee8a6a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:22.467421 kubelet[2656]: E0517 00:24:22.466767 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:24:22.472095 systemd-journald[1132]: Under memory pressure, flushing caches. 
May 17 00:24:22.468540 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:22.468601 systemd-resolved[1477]: Flushed all caches. May 17 00:24:22.745941 containerd[1593]: time="2025-05-17T00:24:22.745623872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:22.749685 sshd[5748]: pam_unix(sshd:session): session closed for user core May 17 00:24:22.761253 systemd[1]: sshd@13-134.199.214.88:22-139.178.68.195:52474.service: Deactivated successfully. May 17 00:24:22.776964 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:24:22.780838 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. May 17 00:24:22.788137 systemd-logind[1555]: Removed session 14. May 17 00:24:22.985563 containerd[1593]: time="2025-05-17T00:24:22.985506226Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:22.988819 containerd[1593]: time="2025-05-17T00:24:22.987909598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:22.988819 containerd[1593]: time="2025-05-17T00:24:22.987974453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:22.992601 kubelet[2656]: E0517 00:24:22.988582 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:22.992601 kubelet[2656]: E0517 00:24:22.988643 2656 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:23.001521 kubelet[2656]: E0517 00:24:22.998749 2656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cz82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-84rn9_calico-system(7b005f59-212f-4f5e-ba82-e64c93f912f7): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:23.010443 kubelet[2656]: E0517 00:24:23.004443 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:24:24.516521 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:24.516560 systemd-resolved[1477]: Flushed all caches. May 17 00:24:24.519214 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:27.763596 systemd[1]: Started sshd@14-134.199.214.88:22-139.178.68.195:41590.service - OpenSSH per-connection server daemon (139.178.68.195:41590). May 17 00:24:27.839744 sshd[5767]: Accepted publickey for core from 139.178.68.195 port 41590 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:27.841066 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:27.851991 systemd-logind[1555]: New session 15 of user core. May 17 00:24:27.859589 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:24:28.171865 sshd[5767]: pam_unix(sshd:session): session closed for user core May 17 00:24:28.178681 systemd[1]: sshd@14-134.199.214.88:22-139.178.68.195:41590.service: Deactivated successfully. May 17 00:24:28.189172 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:24:28.197033 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. May 17 00:24:28.199924 systemd-logind[1555]: Removed session 15. May 17 00:24:29.618880 systemd[1]: run-containerd-runc-k8s.io-e507d86e3300ee59c3221bfe9444d2e89948ffde5ad4d09e6bb6c21a734a7305-runc.caNTe1.mount: Deactivated successfully. May 17 00:24:32.740375 kubelet[2656]: E0517 00:24:32.740326 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:33.183819 systemd[1]: Started sshd@15-134.199.214.88:22-139.178.68.195:41592.service - OpenSSH per-connection server daemon (139.178.68.195:41592). May 17 00:24:33.274491 sshd[5802]: Accepted publickey for core from 139.178.68.195 port 41592 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:33.281076 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:33.299816 systemd-logind[1555]: New session 16 of user core. May 17 00:24:33.305935 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:24:33.627220 sshd[5802]: pam_unix(sshd:session): session closed for user core May 17 00:24:33.640768 systemd[1]: Started sshd@16-134.199.214.88:22-139.178.68.195:54506.service - OpenSSH per-connection server daemon (139.178.68.195:54506). May 17 00:24:33.641651 systemd[1]: sshd@15-134.199.214.88:22-139.178.68.195:41592.service: Deactivated successfully. May 17 00:24:33.644609 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. May 17 00:24:33.647505 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:24:33.650951 systemd-logind[1555]: Removed session 16. May 17 00:24:33.716089 sshd[5814]: Accepted publickey for core from 139.178.68.195 port 54506 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:33.722127 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:33.739741 systemd-logind[1555]: New session 17 of user core. May 17 00:24:33.750453 systemd[1]: Started session-17.scope - Session 17 of User core. 
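
The recurring dns.go:153 "Nameserver limits exceeded" errors mean the node's resolv.conf hands kubelet more nameserver lines than the three-entry resolver limit it enforces (mirroring glibc's MAXNS); the applied line it reports even lists 67.207.67.3 twice. A minimal local re-check of that condition, with the limit of three assumed from kubelet's validation default:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// kubelet warns and truncates when resolv.conf carries more than three
// nameservers, matching glibc's MAXNS; assumed here rather than imported.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	seen := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if seen[fields[1]] {
				fmt.Println("duplicate nameserver:", fields[1]) // e.g. 67.207.67.3 above
			}
			seen[fields[1]] = true
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("%d nameservers; kubelet applies only the first %d and logs the warning above\n",
			len(servers), maxNameservers)
	}
}
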
May 17 00:24:34.244169 sshd[5814]: pam_unix(sshd:session): session closed for user core May 17 00:24:34.258270 systemd[1]: Started sshd@17-134.199.214.88:22-139.178.68.195:54508.service - OpenSSH per-connection server daemon (139.178.68.195:54508). May 17 00:24:34.259304 systemd[1]: sshd@16-134.199.214.88:22-139.178.68.195:54506.service: Deactivated successfully. May 17 00:24:34.270685 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. May 17 00:24:34.272076 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:24:34.275044 systemd-logind[1555]: Removed session 17. May 17 00:24:34.354750 sshd[5827]: Accepted publickey for core from 139.178.68.195 port 54508 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:34.358717 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:34.368899 systemd-logind[1555]: New session 18 of user core. May 17 00:24:34.374725 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:24:34.442126 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:34.436548 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:34.436559 systemd-resolved[1477]: Flushed all caches. May 17 00:24:36.116272 kubelet[2656]: E0517 00:24:36.116200 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:24:36.487621 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:36.484539 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:36.484549 systemd-resolved[1477]: Flushed all caches. May 17 00:24:36.891562 kubelet[2656]: E0517 00:24:36.829963 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:38.489037 sshd[5827]: pam_unix(sshd:session): session closed for user core May 17 00:24:38.496503 systemd[1]: Started sshd@18-134.199.214.88:22-139.178.68.195:54514.service - OpenSSH per-connection server daemon (139.178.68.195:54514). May 17 00:24:38.543019 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:38.542562 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:38.542575 systemd-resolved[1477]: Flushed all caches. May 17 00:24:38.544339 systemd[1]: sshd@17-134.199.214.88:22-139.178.68.195:54508.service: Deactivated successfully. May 17 00:24:38.579056 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:24:38.584355 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. May 17 00:24:38.612204 systemd-logind[1555]: Removed session 18. May 17 00:24:38.772696 sshd[5844]: Accepted publickey for core from 139.178.68.195 port 54514 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:38.777033 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:38.807939 systemd-logind[1555]: New session 19 of user core. 
May 17 00:24:38.812879 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:24:39.001085 kubelet[2656]: E0517 00:24:39.000578 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:24:40.603633 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:40.603093 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:40.603106 systemd-resolved[1477]: Flushed all caches. May 17 00:24:40.793668 sshd[5844]: pam_unix(sshd:session): session closed for user core May 17 00:24:40.844896 systemd[1]: Started sshd@19-134.199.214.88:22-139.178.68.195:54530.service - OpenSSH per-connection server daemon (139.178.68.195:54530). May 17 00:24:40.914904 systemd[1]: sshd@18-134.199.214.88:22-139.178.68.195:54514.service: Deactivated successfully. May 17 00:24:40.945942 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:24:40.956106 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. May 17 00:24:40.973896 systemd-logind[1555]: Removed session 19. May 17 00:24:41.257669 sshd[5862]: Accepted publickey for core from 139.178.68.195 port 54530 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:41.288504 sshd[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:41.311883 systemd-logind[1555]: New session 20 of user core. May 17 00:24:41.318958 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:24:42.629840 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:42.639449 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:42.629851 systemd-resolved[1477]: Flushed all caches. May 17 00:24:42.939233 sshd[5862]: pam_unix(sshd:session): session closed for user core May 17 00:24:42.948161 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. May 17 00:24:42.949698 systemd[1]: sshd@19-134.199.214.88:22-139.178.68.195:54530.service: Deactivated successfully. May 17 00:24:42.964184 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:24:42.967306 systemd-logind[1555]: Removed session 20. May 17 00:24:46.975376 kubelet[2656]: E0517 00:24:46.973618 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:47.953731 systemd[1]: Started sshd@20-134.199.214.88:22-139.178.68.195:36310.service - OpenSSH per-connection server daemon (139.178.68.195:36310). May 17 00:24:48.046223 sshd[5882]: Accepted publickey for core from 139.178.68.195 port 36310 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:48.050091 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:48.065077 systemd-logind[1555]: New session 21 of user core. May 17 00:24:48.069859 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:24:48.311769 sshd[5882]: pam_unix(sshd:session): session closed for user core May 17 00:24:48.319863 systemd[1]: sshd@20-134.199.214.88:22-139.178.68.195:36310.service: Deactivated successfully. May 17 00:24:48.331611 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. 
May 17 00:24:48.331984 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:24:48.338646 systemd-logind[1555]: Removed session 21. May 17 00:24:48.814710 kubelet[2656]: E0517 00:24:48.814577 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a" May 17 00:24:52.740788 kubelet[2656]: E0517 00:24:52.740700 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-84rn9" podUID="7b005f59-212f-4f5e-ba82-e64c93f912f7" May 17 00:24:53.330216 systemd[1]: Started sshd@21-134.199.214.88:22-139.178.68.195:36326.service - OpenSSH per-connection server daemon (139.178.68.195:36326). May 17 00:24:53.509454 sshd[5918]: Accepted publickey for core from 139.178.68.195 port 36326 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:53.517084 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:53.532968 systemd-logind[1555]: New session 22 of user core. May 17 00:24:53.536780 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:24:54.383655 sshd[5918]: pam_unix(sshd:session): session closed for user core May 17 00:24:54.391980 systemd[1]: sshd@21-134.199.214.88:22-139.178.68.195:36326.service: Deactivated successfully. May 17 00:24:54.398279 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. May 17 00:24:54.399035 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:24:54.401851 systemd-logind[1555]: Removed session 22. May 17 00:24:54.472654 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:54.469766 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:54.469774 systemd-resolved[1477]: Flushed all caches. May 17 00:24:56.519335 systemd-journald[1132]: Under memory pressure, flushing caches. May 17 00:24:56.518619 systemd-resolved[1477]: Under memory pressure, flushing caches. May 17 00:24:56.518630 systemd-resolved[1477]: Flushed all caches. May 17 00:24:56.745556 kubelet[2656]: E0517 00:24:56.745501 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:59.400109 systemd[1]: Started sshd@22-134.199.214.88:22-139.178.68.195:46926.service - OpenSSH per-connection server daemon (139.178.68.195:46926). May 17 00:24:59.506371 sshd[5934]: Accepted publickey for core from 139.178.68.195 port 46926 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:24:59.509042 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:59.516357 systemd-logind[1555]: New session 23 of user core. May 17 00:24:59.525691 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:24:59.661280 systemd[1]: run-containerd-runc-k8s.io-e507d86e3300ee59c3221bfe9444d2e89948ffde5ad4d09e6bb6c21a734a7305-runc.Neyqpd.mount: Deactivated successfully. 
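
The run-containerd-runc-k8s.io-…-runc.caNTe1.mount and runc.Neyqpd.mount units are transient mount units systemd tracks for short-lived mounts runc creates under /run/containerd/runc/k8s.io/, which on this node appear to coincide with exec-style container probes; "Deactivated successfully" only records their cleanup. A minimal sketch listing whichever such mounts are currently live:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// Field 5 (index 4) of a mountinfo record is the mount point.
		if len(fields) > 4 && strings.HasPrefix(fields[4], "/run/containerd/runc") {
			fmt.Println(fields[4])
		}
	}
}
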
May 17 00:24:59.922844 sshd[5934]: pam_unix(sshd:session): session closed for user core May 17 00:24:59.942907 systemd[1]: sshd@22-134.199.214.88:22-139.178.68.195:46926.service: Deactivated successfully. May 17 00:24:59.954135 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:24:59.954326 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. May 17 00:24:59.959512 systemd-logind[1555]: Removed session 23. May 17 00:25:01.742476 kubelet[2656]: E0517 00:25:01.741171 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-88b996598-x9bfz" podUID="b963ab05-965a-4613-9925-a8179bee8a6a"
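
The whisker and goldmane messages keep resurfacing through the tail of this log because every failed pull re-enters the kubelet's image back-off: ErrImagePull on an actual pull attempt, then ImagePullBackOff while the retry delay doubles up to a cap, after which the cycle repeats for as long as ghcr.io keeps returning 403. A sketch of that schedule, assuming the commonly cited kubelet defaults of a 10-second initial delay and a 5-minute ceiling:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Commonly cited kubelet defaults, assumed here: 10s initial back-off,
	// doubling after each failed pull, capped at 5 minutes.
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: back off %v before the next pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Once the delay saturates at the cap, the pod_workers.go entries recur at roughly five-minute intervals, which is the spacing visible between the back-off errors above.
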