Feb 13 20:15:48.151519 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:15:48.151566 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:48.151589 kernel: BIOS-provided physical RAM map:
Feb 13 20:15:48.151602 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:15:48.151615 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:15:48.151628 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:15:48.151642 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 20:15:48.151655 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 20:15:48.151667 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:15:48.151682 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:15:48.151695 kernel: NX (Execute Disable) protection: active
Feb 13 20:15:48.151707 kernel: APIC: Static calls initialized
Feb 13 20:15:48.151729 kernel: SMBIOS 2.8 present.
Feb 13 20:15:48.151742 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 20:15:48.151756 kernel: Hypervisor detected: KVM
Feb 13 20:15:48.151771 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:15:48.151789 kernel: kvm-clock: using sched offset of 4127211967 cycles
Feb 13 20:15:48.151804 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:15:48.151818 kernel: tsc: Detected 2000.000 MHz processor
Feb 13 20:15:48.151832 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:15:48.151846 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:15:48.151860 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 20:15:48.151870 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:15:48.151881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:15:48.151897 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:15:48.151910 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 20:15:48.151925 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.151938 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.151964 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.151982 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 20:15:48.152005 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.152015 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.152026 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.152041 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:48.152052 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 20:15:48.152063 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 20:15:48.152075 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 20:15:48.152086 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 20:15:48.152098 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 20:15:48.152112 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 20:15:48.152133 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 20:15:48.152144 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:15:48.152157 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:15:48.152172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:15:48.152204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:15:48.152225 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 20:15:48.152239 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 20:15:48.152258 kernel: Zone ranges:
Feb 13 20:15:48.152273 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:15:48.152288 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 20:15:48.152304 kernel: Normal empty
Feb 13 20:15:48.152320 kernel: Movable zone start for each node
Feb 13 20:15:48.152336 kernel: Early memory node ranges
Feb 13 20:15:48.152351 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:15:48.152366 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 20:15:48.152385 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 20:15:48.152403 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:15:48.152418 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:15:48.152437 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 20:15:48.152452 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:15:48.152467 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:15:48.152482 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:15:48.152498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:15:48.152514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:15:48.152528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:15:48.152546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:15:48.152561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:15:48.152576 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:15:48.152591 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:15:48.152605 kernel: TSC deadline timer available
Feb 13 20:15:48.152620 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:15:48.152634 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:15:48.152649 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 20:15:48.152669 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:15:48.152684 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:15:48.152702 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:15:48.152717 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:15:48.152732 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:15:48.152746 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:15:48.152760 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 20:15:48.152777 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:48.152793 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:15:48.152810 kernel: random: crng init done
Feb 13 20:15:48.152825 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:15:48.152840 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:15:48.152854 kernel: Fallback order for Node 0: 0
Feb 13 20:15:48.152868 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 20:15:48.152883 kernel: Policy zone: DMA32
Feb 13 20:15:48.152897 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:15:48.152912 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125152K reserved, 0K cma-reserved)
Feb 13 20:15:48.152927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:15:48.152945 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:15:48.152959 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:15:48.152973 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:15:48.152988 kernel: Dynamic Preempt: voluntary
Feb 13 20:15:48.153002 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:15:48.153018 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:15:48.153032 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:15:48.153047 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:15:48.153062 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:15:48.153079 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:15:48.153092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:15:48.153104 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:15:48.153116 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:15:48.153128 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
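
The BIOS-e820 map above accounts for the droplet's 2 GB of RAM. A minimal Python sketch of how the usable ranges could be totaled from a saved copy of this log (the file path is hypothetical):

    import re

    # Matches e.g. "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable"
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    usable = 0
    with open("boot.log") as fh:  # hypothetical path to a saved copy of this log
        for line in fh:
            m = E820.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                usable += end - start + 1  # ranges are inclusive

    print(f"usable: {usable / 2**20:.1f} MiB")
    # The two usable ranges above sum to ~2047.6 MiB, consistent with the
    # "Memory: 1971200K/2096612K available" line later in the log.
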
Feb 13 20:15:48.153147 kernel: Console: colour VGA+ 80x25
Feb 13 20:15:48.153162 kernel: printk: console [tty0] enabled
Feb 13 20:15:48.153191 kernel: printk: console [ttyS0] enabled
Feb 13 20:15:48.153226 kernel: ACPI: Core revision 20230628
Feb 13 20:15:48.153240 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 20:15:48.153257 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:15:48.153270 kernel: x2apic enabled
Feb 13 20:15:48.153284 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:15:48.153297 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 20:15:48.153313 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Feb 13 20:15:48.153328 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Feb 13 20:15:48.153384 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:15:48.153412 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:15:48.153442 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:15:48.153458 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:15:48.153474 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:15:48.153494 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:15:48.153510 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:15:48.153526 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:15:48.153542 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:15:48.153558 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:15:48.153574 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:15:48.153599 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:15:48.153615 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:15:48.153631 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:15:48.153648 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:15:48.153664 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:15:48.153680 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:15:48.153696 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:15:48.153712 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:15:48.153731 kernel: landlock: Up and running.
Feb 13 20:15:48.153746 kernel: SELinux: Initializing.
Feb 13 20:15:48.153762 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:15:48.153778 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:15:48.153793 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 20:15:48.153809 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:48.153825 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:48.153842 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:48.153857 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 20:15:48.153876 kernel: signal: max sigframe size: 1776
Feb 13 20:15:48.153892 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:15:48.153908 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:15:48.153924 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:15:48.153939 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:15:48.153955 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:15:48.153971 kernel: .... node #0, CPUs: #1
Feb 13 20:15:48.153986 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:15:48.154007 kernel: smpboot: Max logical packages: 1
Feb 13 20:15:48.154026 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Feb 13 20:15:48.154042 kernel: devtmpfs: initialized
Feb 13 20:15:48.154058 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:15:48.154074 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:15:48.154090 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:15:48.154106 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:15:48.154121 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:15:48.154137 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:15:48.154153 kernel: audit: type=2000 audit(1739477746.508:1): state=initialized audit_enabled=0 res=1
Feb 13 20:15:48.154171 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:15:48.154221 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:15:48.154237 kernel: cpuidle: using governor menu
Feb 13 20:15:48.154252 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:15:48.154267 kernel: dca service started, version 1.12.1
Feb 13 20:15:48.154283 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:15:48.154299 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:15:48.154314 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:15:48.154330 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:15:48.154349 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:15:48.154364 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:15:48.154380 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:15:48.154395 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:15:48.154411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:15:48.154426 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:15:48.154442 kernel: ACPI: Interpreter enabled
Feb 13 20:15:48.154457 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:15:48.154473 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:15:48.154490 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:15:48.154505 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:15:48.154519 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 20:15:48.154535 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:15:48.154927 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:15:48.155095 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:15:48.155275 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:15:48.155302 kernel: acpiphp: Slot [3] registered
Feb 13 20:15:48.155318 kernel: acpiphp: Slot [4] registered
Feb 13 20:15:48.155334 kernel: acpiphp: Slot [5] registered
Feb 13 20:15:48.155350 kernel: acpiphp: Slot [6] registered
Feb 13 20:15:48.155366 kernel: acpiphp: Slot [7] registered
Feb 13 20:15:48.155394 kernel: acpiphp: Slot [8] registered
Feb 13 20:15:48.155410 kernel: acpiphp: Slot [9] registered
Feb 13 20:15:48.155426 kernel: acpiphp: Slot [10] registered
Feb 13 20:15:48.155442 kernel: acpiphp: Slot [11] registered
Feb 13 20:15:48.155461 kernel: acpiphp: Slot [12] registered
Feb 13 20:15:48.155477 kernel: acpiphp: Slot [13] registered
Feb 13 20:15:48.155492 kernel: acpiphp: Slot [14] registered
Feb 13 20:15:48.155508 kernel: acpiphp: Slot [15] registered
Feb 13 20:15:48.155524 kernel: acpiphp: Slot [16] registered
Feb 13 20:15:48.155539 kernel: acpiphp: Slot [17] registered
Feb 13 20:15:48.155555 kernel: acpiphp: Slot [18] registered
Feb 13 20:15:48.155571 kernel: acpiphp: Slot [19] registered
Feb 13 20:15:48.155587 kernel: acpiphp: Slot [20] registered
Feb 13 20:15:48.155601 kernel: acpiphp: Slot [21] registered
Feb 13 20:15:48.155620 kernel: acpiphp: Slot [22] registered
Feb 13 20:15:48.155634 kernel: acpiphp: Slot [23] registered
Feb 13 20:15:48.155650 kernel: acpiphp: Slot [24] registered
Feb 13 20:15:48.155665 kernel: acpiphp: Slot [25] registered
Feb 13 20:15:48.155682 kernel: acpiphp: Slot [26] registered
Feb 13 20:15:48.155697 kernel: acpiphp: Slot [27] registered
Feb 13 20:15:48.155714 kernel: acpiphp: Slot [28] registered
Feb 13 20:15:48.155730 kernel: acpiphp: Slot [29] registered
Feb 13 20:15:48.155746 kernel: acpiphp: Slot [30] registered
Feb 13 20:15:48.155765 kernel: acpiphp: Slot [31] registered
Feb 13 20:15:48.155781 kernel: PCI host bridge to bus 0000:00
Feb 13 20:15:48.155977 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:15:48.156118 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:15:48.156334 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:48.156560 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:15:48.156691 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 20:15:48.156818 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:15:48.157026 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:15:48.157262 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:15:48.157467 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 20:15:48.157614 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 20:15:48.157759 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 20:15:48.157902 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 20:15:48.158050 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 20:15:48.158227 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 20:15:48.158395 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 20:15:48.158539 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 20:15:48.158725 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:15:48.158873 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 20:15:48.159035 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 20:15:48.159279 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 20:15:48.159439 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 20:15:48.159593 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 20:15:48.159746 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 20:15:48.159898 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:15:48.160051 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:15:48.160274 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:48.160427 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 20:15:48.161331 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 20:15:48.161525 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 20:15:48.161709 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:48.161854 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 20:15:48.162009 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 20:15:48.162157 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 20:15:48.162350 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 20:15:48.162501 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 20:15:48.162664 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 20:15:48.162806 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 20:15:48.163009 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:15:48.163159 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:15:48.164566 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 20:15:48.164722 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 20:15:48.164894 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:15:48.165042 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 20:15:48.165202 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 20:15:48.165356 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 20:15:48.165554 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 20:15:48.165721 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 20:15:48.165870 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 20:15:48.165890 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:15:48.165907 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:15:48.165923 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:15:48.165939 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:15:48.165955 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:15:48.165976 kernel: iommu: Default domain type: Translated
Feb 13 20:15:48.165992 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:15:48.166008 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:15:48.166024 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:15:48.166040 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:15:48.166055 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 20:15:48.167366 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 20:15:48.167561 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 20:15:48.167716 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:15:48.167736 kernel: vgaarb: loaded
Feb 13 20:15:48.167753 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 20:15:48.167769 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 20:15:48.167786 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:15:48.167801 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:15:48.167818 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:15:48.167834 kernel: pnp: PnP ACPI init
Feb 13 20:15:48.167849 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 20:15:48.167869 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:15:48.167886 kernel: NET: Registered PF_INET protocol family
Feb 13 20:15:48.167902 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:15:48.167918 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:15:48.167935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:15:48.167951 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:48.167967 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:15:48.167983 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:15:48.167999 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:15:48.168018 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:15:48.168034 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:15:48.168050 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:15:48.168208 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:15:48.169462 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:15:48.169606 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:48.169745 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:15:48.169880 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 20:15:48.170058 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 20:15:48.170232 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:15:48.170255 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:15:48.170831 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40050 usecs
Feb 13 20:15:48.170865 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:15:48.170883 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:15:48.170900 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Feb 13 20:15:48.170916 kernel: Initialise system trusted keyrings
Feb 13 20:15:48.170932 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:15:48.170957 kernel: Key type asymmetric registered
Feb 13 20:15:48.170972 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:15:48.170988 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:15:48.171004 kernel: io scheduler mq-deadline registered
Feb 13 20:15:48.171020 kernel: io scheduler kyber registered
Feb 13 20:15:48.171035 kernel: io scheduler bfq registered
Feb 13 20:15:48.171052 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:15:48.171068 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 20:15:48.171085 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:15:48.171103 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:15:48.171119 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:15:48.171135 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:15:48.171151 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:15:48.171165 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:15:48.171198 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:15:48.171215 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:15:48.171446 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:15:48.171591 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:15:48.171724 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:15:47 UTC (1739477747)
Feb 13 20:15:48.171859 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:15:48.171878 kernel: intel_pstate: CPU model not supported
Feb 13 20:15:48.171894 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:15:48.171910 kernel: Segment Routing with IPv6
Feb 13 20:15:48.171926 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:15:48.171942 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:15:48.171962 kernel: Key type dns_resolver registered
Feb 13 20:15:48.171978 kernel: IPI shorthand broadcast: enabled
Feb 13 20:15:48.171993 kernel: sched_clock: Marking stable (1429008833, 185297190)->(1715580355, -101274332)
Feb 13 20:15:48.172009 kernel: registered taskstats version 1
Feb 13 20:15:48.172026 kernel: Loading compiled-in X.509 certificates
Feb 13 20:15:48.172042 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:15:48.172058 kernel: Key type .fscrypt registered
Feb 13 20:15:48.172073 kernel: Key type fscrypt-provisioning registered
Feb 13 20:15:48.172089 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:15:48.172108 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:15:48.172124 kernel: ima: No architecture policies found
Feb 13 20:15:48.172138 kernel: clk: Disabling unused clocks
Feb 13 20:15:48.172149 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:15:48.172162 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:15:48.174233 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:15:48.174271 kernel: Run /init as init process
Feb 13 20:15:48.174288 kernel: with arguments:
Feb 13 20:15:48.174306 kernel: /init
Feb 13 20:15:48.174324 kernel: with environment:
Feb 13 20:15:48.174338 kernel: HOME=/
Feb 13 20:15:48.174353 kernel: TERM=linux
Feb 13 20:15:48.174367 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:15:48.174386 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:15:48.174407 systemd[1]: Detected virtualization kvm.
Feb 13 20:15:48.174429 systemd[1]: Detected architecture x86-64.
Feb 13 20:15:48.174446 systemd[1]: Running in initrd.
Feb 13 20:15:48.174466 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:15:48.174483 systemd[1]: Hostname set to .
Feb 13 20:15:48.174501 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:15:48.174519 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:15:48.174536 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:48.174554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:48.174595 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:15:48.174613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:15:48.174633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:15:48.174786 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:15:48.174806 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:15:48.174824 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:15:48.174841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:48.174859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:48.174876 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:15:48.174897 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:15:48.174915 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:15:48.174936 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:15:48.174953 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:48.174971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
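
The rtc_cmos entry above prints both the human-readable time and the epoch seconds it sets; the two values can be checked against each other with a short Python sketch:

    from datetime import datetime, timezone

    # From "rtc_cmos 00:03: setting system clock to 2025-02-13T20:15:47 UTC (1739477747)"
    epoch = 1739477747
    stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
    assert stamp == datetime(2025, 2, 13, 20, 15, 47, tzinfo=timezone.utc)
    print(stamp.isoformat())  # 2025-02-13T20:15:47+00:00
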
Feb 13 20:15:48.174991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:15:48.175009 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:15:48.175027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:48.175045 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:48.175062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:48.175080 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:15:48.175098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:15:48.175115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:15:48.175133 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:15:48.175153 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:15:48.175171 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:15:48.175273 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:15:48.175291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:48.175367 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 20:15:48.175426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:48.175444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:48.175462 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:15:48.175482 systemd-journald[183]: Journal started
Feb 13 20:15:48.175524 systemd-journald[183]: Runtime Journal (/run/log/journal/8e8d1c1d82474a4d89950a23388c74fd) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:15:48.184219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:15:48.184382 systemd-modules-load[184]: Inserted module 'overlay'
Feb 13 20:15:48.240099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:15:48.240132 kernel: Bridge firewalling registered
Feb 13 20:15:48.228108 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 13 20:15:48.251262 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:15:48.251621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:48.259485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:48.260561 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:48.274856 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:48.278442 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:15:48.281380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:15:48.284210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:15:48.323586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:48.329504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:48.332408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:48.333590 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:48.340514 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:15:48.351690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:48.380351 dracut-cmdline[216]: dracut-dracut-053
Feb 13 20:15:48.387440 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:48.409804 systemd-resolved[217]: Positive Trust Anchors:
Feb 13 20:15:48.409828 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:15:48.409864 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:15:48.414452 systemd-resolved[217]: Defaulting to hostname 'linux'.
Feb 13 20:15:48.416211 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:15:48.419595 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:15:48.579236 kernel: SCSI subsystem initialized
Feb 13 20:15:48.599251 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:15:48.617238 kernel: iscsi: registered transport (tcp)
Feb 13 20:15:48.651530 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:15:48.651638 kernel: QLogic iSCSI HBA Driver
Feb 13 20:15:48.733042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:15:48.741598 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:15:48.793355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:15:48.793456 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:15:48.795704 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:15:48.857282 kernel: raid6: avx2x4 gen() 13863 MB/s
Feb 13 20:15:48.875275 kernel: raid6: avx2x2 gen() 9287 MB/s
Feb 13 20:15:48.893886 kernel: raid6: avx2x1 gen() 9908 MB/s
Feb 13 20:15:48.894006 kernel: raid6: using algorithm avx2x4 gen() 13863 MB/s
Feb 13 20:15:48.912869 kernel: raid6: .... xor() 5469 MB/s, rmw enabled
Feb 13 20:15:48.912985 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:15:48.960363 kernel: xor: automatically using best checksumming function avx
Feb 13 20:15:49.255273 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:15:49.278711 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:15:49.290595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
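
The dracut-cmdline[216] entry above logs the full kernel command line, with dracut's own defaults (rd.driver.pre=btrfs and the first rootflags=rw mount.usrflags=ro pair) prepended to what the bootloader passed. A rough Python sketch of splitting it into key/value pairs; note that a flat dict keeps only the last occurrence of repeated keys, so multi-valued parameters like console are collapsed:

    # Command line copied from the dracut-cmdline[216] entry above.
    cmdline = (
        "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
        "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
        "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected "
        "flatcar.oem.id=digitalocean "
        "verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13"
    )

    args = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        args[key] = value  # last occurrence wins; flags without '=' map to ""

    print(args["root"])        # LABEL=ROOT
    print(args["verity.usr"])  # PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
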
Feb 13 20:15:49.330852 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 20:15:49.337645 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:15:49.371552 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:15:49.425466 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:15:49.498397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:15:49.509575 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:15:49.599631 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:49.606818 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:15:49.643970 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:15:49.651570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:15:49.652848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:49.655160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:15:49.665813 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:15:49.706429 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:15:49.737033 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 20:15:49.820512 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:15:49.820937 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:15:49.820975 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:15:49.821226 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:15:49.821241 kernel: GPT:9289727 != 125829119
Feb 13 20:15:49.821251 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:15:49.821262 kernel: GPT:9289727 != 125829119
Feb 13 20:15:49.821281 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:15:49.821292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:49.821303 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 20:15:49.846305 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Feb 13 20:15:49.875822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:15:49.875969 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:49.880679 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:49.881579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:15:49.888394 kernel: ACPI: bus type USB registered
Feb 13 20:15:49.888455 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:15:49.881865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:49.884818 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:49.900216 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:15:49.901955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:49.903582 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:15:49.913247 kernel: libata version 3.00 loaded.
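
The GPT warnings above are the signature of a grown disk: the backup GPT header belongs on the disk's last LBA, but it still sits where the smaller original image left it. The arithmetic, using the virtio_blk capacity line, as a Python sketch:

    # From "virtio_blk virtio4: [vda] 125829120 512-byte logical blocks"
    blocks = 125829120
    expected_alt_lba = blocks - 1  # backup GPT header belongs on the last LBA
    found_alt_lba = 9289727        # from "GPT:9289727 != 125829119"

    assert expected_alt_lba == 125829119
    original_gib = (found_alt_lba + 1) * 512 / 2**30
    print(f"image sized for ~{original_gib:.2f} GiB")  # ~4.43 GiB vs. the 60 GiB disk

The disk-uuid entries a little further down show the headers being rewritten to match the real disk size ("Secondary Header is updated").
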
Feb 13 20:15:49.940288 kernel: usbcore: registered new interface driver hub
Feb 13 20:15:49.940391 kernel: usbcore: registered new device driver usb
Feb 13 20:15:49.971021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:15:49.981216 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Feb 13 20:15:50.019683 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 20:15:50.049478 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (445)
Feb 13 20:15:50.049512 kernel: scsi host1: ata_piix
Feb 13 20:15:50.049785 kernel: scsi host2: ata_piix
Feb 13 20:15:50.049951 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 20:15:50.049965 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 20:15:50.068903 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:15:50.122170 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 20:15:50.122597 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 20:15:50.122965 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 20:15:50.123164 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 20:15:50.123382 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:15:50.123599 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 20:15:50.130426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:15:50.132390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:50.142807 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:15:50.144722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:15:50.153537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:15:50.163544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:50.173650 disk-uuid[529]: Primary Header is updated.
Feb 13 20:15:50.173650 disk-uuid[529]: Secondary Entries is updated.
Feb 13 20:15:50.173650 disk-uuid[529]: Secondary Header is updated.
Feb 13 20:15:50.188226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:50.194233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:50.194496 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:51.204329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:51.206014 disk-uuid[531]: The operation has completed successfully.
Feb 13 20:15:51.265590 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:15:51.265772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:15:51.277543 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:15:51.297427 sh[560]: Success
Feb 13 20:15:51.339239 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:15:51.497820 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:15:51.511935 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:15:51.513794 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
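
verity-setup.service has now opened /dev/mapper/usr, with the root hash taken from verity.usrhash on the kernel command line. Not part of the boot flow, but as a sketch of how the mapping could be inspected afterwards from the booted host (requires root; veritysetup is part of the cryptsetup suite):

    import subprocess

    # "usr" is the device-mapper name behind /dev/mapper/usr; the status output
    # reports the backing data/hash devices and the verification state.
    result = subprocess.run(
        ["veritysetup", "status", "usr"],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)
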
Feb 13 20:15:51.568629 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:15:51.568748 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:51.582790 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:15:51.583479 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:15:51.585983 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:15:51.609838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:15:51.612713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:15:51.623659 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:15:51.638540 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:15:51.676336 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:51.676447 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:51.676479 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:15:51.686222 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:15:51.706220 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:51.705726 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:15:51.725439 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:15:51.734937 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:15:51.857226 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:15:51.866536 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:15:51.948329 systemd-networkd[743]: lo: Link UP
Feb 13 20:15:51.949458 systemd-networkd[743]: lo: Gained carrier
Feb 13 20:15:51.956624 ignition[661]: Ignition 2.19.0
Feb 13 20:15:51.957868 ignition[661]: Stage: fetch-offline
Feb 13 20:15:51.958282 systemd-networkd[743]: Enumeration completed
Feb 13 20:15:51.958047 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:51.958478 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:15:51.958070 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:51.959455 systemd[1]: Reached target network.target - Network.
Feb 13 20:15:51.958389 ignition[661]: parsed url from cmdline: ""
Feb 13 20:15:51.961030 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:15:51.958397 ignition[661]: no config URL provided
Feb 13 20:15:51.961037 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 20:15:51.958404 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:15:51.962768 systemd-networkd[743]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:15:51.958415 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:15:51.962775 systemd-networkd[743]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:15:51.958422 ignition[661]: failed to fetch config: resource requires networking
Feb 13 20:15:51.964051 systemd-networkd[743]: eth0: Link UP
Feb 13 20:15:51.958922 ignition[661]: Ignition finished successfully
Feb 13 20:15:51.964058 systemd-networkd[743]: eth0: Gained carrier
Feb 13 20:15:51.964074 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:15:51.966404 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:15:51.970836 systemd-networkd[743]: eth1: Link UP
Feb 13 20:15:51.970842 systemd-networkd[743]: eth1: Gained carrier
Feb 13 20:15:51.970859 systemd-networkd[743]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:15:51.977510 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:15:51.986327 systemd-networkd[743]: eth1: DHCPv4 address 10.124.0.11/20 acquired from 169.254.169.253
Feb 13 20:15:51.993379 systemd-networkd[743]: eth0: DHCPv4 address 137.184.189.10/20, gateway 137.184.176.1 acquired from 169.254.169.253
Feb 13 20:15:52.015401 ignition[750]: Ignition 2.19.0
Feb 13 20:15:52.015423 ignition[750]: Stage: fetch
Feb 13 20:15:52.015729 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:52.015742 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:52.015869 ignition[750]: parsed url from cmdline: ""
Feb 13 20:15:52.015873 ignition[750]: no config URL provided
Feb 13 20:15:52.015879 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:15:52.015889 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:15:52.015921 ignition[750]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 20:15:52.070506 ignition[750]: GET result: OK
Feb 13 20:15:52.070846 ignition[750]: parsing config with SHA512: ef96a02be81414d5d6ffd7ac3cc85c0a7a8e691efb184929b6600c8d1c5f0c1a90ceeab2c84f5e1c214e6c07a6da3cfbced3caa25183eba911e2bdaf43168984
Feb 13 20:15:52.079075 unknown[750]: fetched base config from "system"
Feb 13 20:15:52.079090 unknown[750]: fetched base config from "system"
Feb 13 20:15:52.079102 unknown[750]: fetched user config from "digitalocean"
Feb 13 20:15:52.084046 ignition[750]: fetch: fetch complete
Feb 13 20:15:52.084067 ignition[750]: fetch: fetch passed
Feb 13 20:15:52.084402 ignition[750]: Ignition finished successfully
Feb 13 20:15:52.086678 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:15:52.095889 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:15:52.138850 ignition[758]: Ignition 2.19.0
Feb 13 20:15:52.139861 ignition[758]: Stage: kargs
Feb 13 20:15:52.140203 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:52.140217 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:52.145033 ignition[758]: kargs: kargs passed
Feb 13 20:15:52.145864 ignition[758]: Ignition finished successfully
Feb 13 20:15:52.147613 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:15:52.161698 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
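
Ignition logs the SHA512 of the user-data it fetched from the metadata service. Given a byte-identical copy of that config, the digest can be reproduced; the local filename below is hypothetical:

    import hashlib

    # Digest copied from the "parsing config with SHA512: ..." entry above.
    logged = (
        "ef96a02be81414d5d6ffd7ac3cc85c0a7a8e691efb184929b6600c8d1c5f0c1a"
        "90ceeab2c84f5e1c214e6c07a6da3cfbced3caa25183eba911e2bdaf43168984"
    )

    with open("user-data.ign", "rb") as fh:  # hypothetical local copy of the fetched config
        digest = hashlib.sha512(fh.read()).hexdigest()

    print("match" if digest == logged else "mismatch")
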
Feb 13 20:15:52.199766 ignition[764]: Ignition 2.19.0 Feb 13 20:15:52.199790 ignition[764]: Stage: disks Feb 13 20:15:52.200148 ignition[764]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:52.200172 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:15:52.201924 ignition[764]: disks: disks passed Feb 13 20:15:52.203663 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:15:52.202026 ignition[764]: Ignition finished successfully Feb 13 20:15:52.212332 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:15:52.213899 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:15:52.216785 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:15:52.218697 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:15:52.220155 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:15:52.228604 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:15:52.268295 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:15:52.275475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:15:52.286492 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:15:52.468224 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:15:52.469772 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:15:52.472905 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:15:52.484435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:15:52.491683 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:15:52.512205 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (781) Feb 13 20:15:52.512496 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Feb 13 20:15:52.517849 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:52.520979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:52.521071 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:15:52.523962 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:15:52.526291 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:15:52.528141 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:15:52.548819 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:15:52.552682 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:15:52.559693 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:15:52.563881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:15:52.657224 coreos-metadata[784]: Feb 13 20:15:52.656 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:15:52.668939 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:15:52.677999 coreos-metadata[784]: Feb 13 20:15:52.676 INFO Fetch successful Feb 13 20:15:52.685870 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:15:52.691725 coreos-metadata[783]: Feb 13 20:15:52.689 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:15:52.693546 coreos-metadata[784]: Feb 13 20:15:52.690 INFO wrote hostname ci-4081.3.1-d-1eeb8951e4 to /sysroot/etc/hostname Feb 13 20:15:52.692879 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:15:52.700625 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:15:52.705417 coreos-metadata[783]: Feb 13 20:15:52.705 INFO Fetch successful Feb 13 20:15:52.712405 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:15:52.723948 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 13 20:15:52.724113 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Feb 13 20:15:52.925325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:15:52.932411 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:15:52.936491 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:15:52.950696 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:15:52.958796 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:53.004433 ignition[904]: INFO : Ignition 2.19.0 Feb 13 20:15:53.004419 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:15:53.007117 ignition[904]: INFO : Stage: mount Feb 13 20:15:53.008749 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:53.008749 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:15:53.011113 ignition[904]: INFO : mount: mount passed Feb 13 20:15:53.011113 ignition[904]: INFO : Ignition finished successfully Feb 13 20:15:53.012946 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:15:53.019465 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:15:53.054655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:15:53.068257 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Feb 13 20:15:53.071618 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:53.071730 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:53.073571 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:15:53.081260 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:15:53.085567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
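The coreos-metadata fetches above pull the droplet's metadata JSON and write its hostname into the new root. A sketch of the same flow, assuming the v1.json document carries a hostname field and using jq purely for illustration (the agent parses the JSON itself):

    meta="$(curl -sf http://169.254.169.254/metadata/v1.json)"
    printf '%s\n' "$meta" | jq -r .hostname > /sysroot/etc/hostname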
Feb 13 20:15:53.120366 ignition[934]: INFO : Ignition 2.19.0 Feb 13 20:15:53.120366 ignition[934]: INFO : Stage: files Feb 13 20:15:53.124587 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:53.124587 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:15:53.124587 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:15:53.128279 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:15:53.128279 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:15:53.130873 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:15:53.131801 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:15:53.131801 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:15:53.131455 unknown[934]: wrote ssh authorized keys file for user: core Feb 13 20:15:53.135318 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:15:53.135318 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:15:53.135318 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:15:53.135318 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:15:53.184360 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:15:53.314544 systemd-networkd[743]: eth0: Gained IPv6LL Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:15:53.350226 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:15:53.378708 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:15:53.378708 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Feb 13 20:15:53.378708 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:15:53.378708 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:15:53.378708 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:15:53.378708 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:15:53.443577 systemd-networkd[743]: eth1: Gained IPv6LL Feb 13 20:15:53.542659 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:15:54.080385 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:15:54.082727 ignition[934]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:15:54.085529 ignition[934]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:15:54.088703 ignition[934]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:15:54.088703 ignition[934]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:15:54.088703 ignition[934]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:15:54.088703 ignition[934]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:15:54.098086 ignition[934]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:15:54.098086 ignition[934]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:15:54.098086 ignition[934]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:15:54.098086 ignition[934]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:15:54.098086 ignition[934]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:15:54.098086 ignition[934]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:15:54.098086 ignition[934]: INFO : files: files passed Feb 13 20:15:54.098086 ignition[934]: INFO : Ignition finished successfully Feb 13 20:15:54.099713 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:15:54.107642 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:15:54.115583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:15:54.124645 systemd[1]: ignition-quench.service: Deactivated successfully. 
Feb 13 20:15:54.124850 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:15:54.153671 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:15:54.153671 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:15:54.156492 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:15:54.158281 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:15:54.160865 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:15:54.175857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:15:54.218507 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:15:54.219571 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:15:54.221984 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:15:54.223419 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:15:54.225555 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:15:54.233576 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:15:54.260636 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:15:54.283211 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:15:54.301763 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:15:54.303990 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:15:54.305229 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:15:54.306950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:15:54.307249 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:15:54.311490 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:15:54.312897 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:15:54.315006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:15:54.316222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:15:54.319604 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:15:54.320923 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:15:54.321764 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:15:54.324602 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:15:54.326954 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:15:54.328630 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:15:54.329801 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:15:54.330157 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:15:54.331943 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:15:54.333536 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:15:54.335133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 20:15:54.335282 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:15:54.337112 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:15:54.337357 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:15:54.340592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:15:54.340864 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:15:54.342957 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:15:54.343145 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:15:54.344203 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:15:54.344396 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:15:54.360664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:15:54.361983 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:15:54.362512 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:15:54.367667 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:15:54.368442 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:15:54.368708 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:15:54.370511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:15:54.372487 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:15:54.383391 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:15:54.383589 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:15:54.402266 ignition[987]: INFO : Ignition 2.19.0 Feb 13 20:15:54.402266 ignition[987]: INFO : Stage: umount Feb 13 20:15:54.406239 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:54.406239 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:15:54.410459 ignition[987]: INFO : umount: umount passed Feb 13 20:15:54.410459 ignition[987]: INFO : Ignition finished successfully Feb 13 20:15:54.413215 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:15:54.414414 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:15:54.416988 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:15:54.417093 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:15:54.419533 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:15:54.419644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:15:54.420680 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:15:54.420743 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:15:54.421420 systemd[1]: Stopped target network.target - Network. Feb 13 20:15:54.426765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:15:54.426911 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:15:54.429096 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:15:54.429780 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 20:15:54.433746 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:15:54.435292 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:15:54.437117 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:15:54.438504 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:15:54.438746 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:15:54.439711 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:15:54.439765 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:15:54.440837 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:15:54.440908 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:15:54.442253 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:15:54.442317 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:15:54.444765 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:15:54.446130 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:15:54.457499 systemd-networkd[743]: eth0: DHCPv6 lease lost Feb 13 20:15:54.459662 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:15:54.460620 systemd-networkd[743]: eth1: DHCPv6 lease lost Feb 13 20:15:54.465597 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:15:54.465826 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:15:54.468303 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:15:54.468603 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:15:54.472839 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:15:54.473772 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:15:54.477350 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:15:54.477449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:15:54.480626 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:15:54.480853 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:15:54.499657 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:15:54.501197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:15:54.501334 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:15:54.501997 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:15:54.502065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:15:54.502923 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:15:54.502985 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:15:54.503743 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:15:54.503819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:15:54.504637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:15:54.518928 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:15:54.520348 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:15:54.522978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:15:54.523112 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:15:54.524308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:15:54.524369 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:15:54.525778 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:15:54.525866 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:15:54.527926 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:15:54.528017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:15:54.530112 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:15:54.530299 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:54.534409 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:15:54.536598 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:15:54.536723 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:15:54.539473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:15:54.539583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:54.543587 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:15:54.543767 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:15:54.561405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:15:54.561602 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:15:54.563708 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:15:54.571581 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:15:54.596068 systemd[1]: Switching root. Feb 13 20:15:54.699081 systemd-journald[183]: Journal stopped Feb 13 20:15:56.523472 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 13 20:15:56.523629 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:15:56.523666 kernel: SELinux: policy capability open_perms=1 Feb 13 20:15:56.523686 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:15:56.523705 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:15:56.523733 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:15:56.523759 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:15:56.523778 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:15:56.523798 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:15:56.523817 kernel: audit: type=1403 audit(1739477755.024:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:15:56.523856 systemd[1]: Successfully loaded SELinux policy in 48.333ms. Feb 13 20:15:56.523886 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.249ms. Feb 13 20:15:56.523907 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:15:56.523929 systemd[1]: Detected virtualization kvm. 
Feb 13 20:15:56.523948 systemd[1]: Detected architecture x86-64. Feb 13 20:15:56.523966 systemd[1]: Detected first boot. Feb 13 20:15:56.523987 systemd[1]: Hostname set to <ci-4081.3.1-d-1eeb8951e4>. Feb 13 20:15:56.524007 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:15:56.524033 zram_generator::config[1050]: No configuration found. Feb 13 20:15:56.524055 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:15:56.524076 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:15:56.524096 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:15:56.524120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:15:56.524141 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:15:56.524160 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:15:56.530019 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:15:56.531273 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:15:56.531338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:15:56.531360 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:15:56.531392 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:15:56.531410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:15:56.531430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:15:56.531451 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:15:56.531469 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:15:56.531489 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:15:56.531519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:15:56.531537 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:15:56.531555 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:15:56.531573 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:15:56.531590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:15:56.531609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:15:56.531628 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:15:56.531655 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:15:56.531674 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:15:56.531695 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:15:56.531715 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:15:56.531732 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:15:56.531749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:15:56.531767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:15:56.531783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 20:15:56.531802 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:15:56.531846 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:15:56.531865 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:15:56.531886 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:15:56.531903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:56.531934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:15:56.531951 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:15:56.531969 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:15:56.532017 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:15:56.532036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:15:56.532060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:15:56.532077 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:15:56.532095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:15:56.532114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:15:56.532132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:15:56.532150 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:15:56.532167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:15:56.532227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:15:56.532255 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:15:56.532279 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:15:56.532300 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:15:56.532317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:15:56.532334 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:15:56.532351 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:15:56.532368 kernel: fuse: init (API version 7.39) Feb 13 20:15:56.532387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:15:56.532406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:56.532434 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:15:56.532454 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:15:56.532475 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:15:56.532494 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:15:56.532515 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:15:56.532533 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Feb 13 20:15:56.532552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:15:56.532571 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:15:56.532596 kernel: loop: module loaded Feb 13 20:15:56.532614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:15:56.532631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:15:56.532649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:15:56.532667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:15:56.532692 kernel: ACPI: bus type drm_connector registered Feb 13 20:15:56.532709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:15:56.532727 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:15:56.532745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:15:56.532761 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:15:56.532780 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:15:56.532799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:15:56.532822 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:15:56.532851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:15:56.532877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:15:56.532896 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:15:56.534079 systemd-journald[1137]: Collecting audit messages is disabled. Feb 13 20:15:56.534228 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:15:56.534270 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:15:56.534287 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:15:56.534314 systemd-journald[1137]: Journal started Feb 13 20:15:56.534354 systemd-journald[1137]: Runtime Journal (/run/log/journal/8e8d1c1d82474a4d89950a23388c74fd) is 4.9M, max 39.3M, 34.4M free. Feb 13 20:15:56.551350 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:15:56.562380 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:15:56.579320 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:15:56.591229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:15:56.606353 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:15:56.618337 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:15:56.638069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:15:56.665300 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:15:56.682579 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:15:56.681702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Feb 13 20:15:56.683536 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:15:56.747350 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:15:56.767121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:15:56.777402 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:15:56.799766 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:15:56.822576 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Feb 13 20:15:56.822611 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Feb 13 20:15:56.853503 systemd-journald[1137]: Time spent on flushing to /var/log/journal/8e8d1c1d82474a4d89950a23388c74fd is 60.169ms for 979 entries. Feb 13 20:15:56.853503 systemd-journald[1137]: System Journal (/var/log/journal/8e8d1c1d82474a4d89950a23388c74fd) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:15:56.944347 systemd-journald[1137]: Received client request to flush runtime journal. Feb 13 20:15:56.859621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:15:56.878638 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:15:56.915891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:15:56.924622 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:15:56.953122 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:15:56.968468 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:15:57.067508 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:15:57.085561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:15:57.166808 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Feb 13 20:15:57.166846 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Feb 13 20:15:57.184456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:15:58.156794 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:15:58.184820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:15:58.240973 systemd-udevd[1217]: Using default interface naming scheme 'v255'. Feb 13 20:15:58.288748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:15:58.305469 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:15:58.366709 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:15:58.451746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:58.452339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:15:58.459792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:15:58.471469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:15:58.485507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
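The "Received client request to flush runtime journal" entry above is systemd-journal-flush.service asking journald to migrate the runtime journal in /run/log/journal to the persistent one under /var/log/journal; the same request can be issued with:

    journalctl --flush   # migrate /run/log/journal to /var/log/journal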
Feb 13 20:15:58.488348 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:15:58.488428 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:15:58.488505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:58.489098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:15:58.495471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:15:58.509334 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:15:58.509670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:15:58.511815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:15:58.539786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:15:58.547672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:15:58.555377 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:15:58.584149 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 20:15:58.592464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:15:58.663237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1229) Feb 13 20:15:58.771918 systemd-networkd[1224]: lo: Link UP Feb 13 20:15:58.772605 systemd-networkd[1224]: lo: Gained carrier Feb 13 20:15:58.778803 systemd-networkd[1224]: Enumeration completed Feb 13 20:15:58.779401 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:15:58.779482 systemd-networkd[1224]: eth0: Configuring with /run/systemd/network/10-0e:1f:0e:d8:f8:eb.network. Feb 13 20:15:58.780996 systemd-networkd[1224]: eth1: Configuring with /run/systemd/network/10-8a:34:8b:7d:58:b7.network. Feb 13 20:15:58.784138 systemd-networkd[1224]: eth0: Link UP Feb 13 20:15:58.784567 systemd-networkd[1224]: eth0: Gained carrier Feb 13 20:15:58.790871 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:15:58.792754 systemd-networkd[1224]: eth1: Link UP Feb 13 20:15:58.792945 systemd-networkd[1224]: eth1: Gained carrier Feb 13 20:15:58.831703 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:15:58.860224 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:15:58.867866 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:15:58.915246 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:15:58.956108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:15:59.007829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
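The eth0/eth1 setup above is driven by runtime .network units keyed on each NIC's MAC address (the 10-0e:1f:0e:d8:f8:eb.network names). A guess at the minimal shape of such a unit; the real files are written by the platform's network generator and may carry more keys:

    cat > /run/systemd/network/10-0e:1f:0e:d8:f8:eb.network <<'EOF'
    [Match]
    MACAddress=0e:1f:0e:d8:f8:eb

    [Network]
    DHCP=ipv4
    EOF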
Feb 13 20:15:59.039224 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:15:59.086282 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:15:59.086432 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:15:59.092366 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:15:59.093379 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:15:59.093458 kernel: [drm] features: -context_init Feb 13 20:15:59.099234 kernel: [drm] number of scanouts: 1 Feb 13 20:15:59.102555 kernel: [drm] number of cap sets: 0 Feb 13 20:15:59.104427 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:15:59.120980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:15:59.136681 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:15:59.136857 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:15:59.126688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:59.159401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:59.164434 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:15:59.265015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:15:59.266229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:59.304595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:59.477168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:59.576931 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:15:59.653057 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:15:59.684608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:15:59.721530 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:15:59.771783 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:15:59.775138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:15:59.793032 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:15:59.830336 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:15:59.885711 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:15:59.893576 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:15:59.912500 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 20:15:59.915393 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:15:59.915491 systemd[1]: Reached target machines.target - Containers. Feb 13 20:15:59.921624 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:15:59.967429 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 20:15:59.973812 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 20:15:59.979125 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Feb 13 20:15:59.981696 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:15:59.986445 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:16:00.008838 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:16:00.017546 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:16:00.019228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:00.026670 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:16:00.060480 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:16:00.073047 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:16:00.120272 kernel: loop0: detected capacity change from 0 to 8 Feb 13 20:16:00.158414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:16:00.167595 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:16:00.236541 systemd-networkd[1224]: eth1: Gained IPv6LL Feb 13 20:16:00.249327 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:16:00.258261 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:16:00.330615 kernel: loop1: detected capacity change from 0 to 140768 Feb 13 20:16:00.422524 systemd-networkd[1224]: eth0: Gained IPv6LL Feb 13 20:16:00.494242 kernel: loop2: detected capacity change from 0 to 142488 Feb 13 20:16:00.599003 kernel: loop3: detected capacity change from 0 to 210664 Feb 13 20:16:00.669682 kernel: loop4: detected capacity change from 0 to 8 Feb 13 20:16:00.675578 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 20:16:00.708726 kernel: loop6: detected capacity change from 0 to 142488 Feb 13 20:16:00.743794 kernel: loop7: detected capacity change from 0 to 210664 Feb 13 20:16:00.793309 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 20:16:00.797863 (sd-merge)[1313]: Merged extensions into '/usr'. Feb 13 20:16:00.815530 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:16:00.816160 systemd[1]: Reloading... Feb 13 20:16:01.072985 zram_generator::config[1337]: No configuration found. Feb 13 20:16:01.587548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:01.765479 systemd[1]: Reloading finished in 948 ms. Feb 13 20:16:01.801756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:16:01.832156 systemd[1]: Starting ensure-sysext.service... Feb 13 20:16:01.871138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:16:01.924956 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:16:01.924999 systemd[1]: Reloading... Feb 13 20:16:01.944476 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
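The (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr and /opt; each loopN capacity change corresponds to one mounted .raw image. The merge state can be inspected or redone with the stock CLI:

    systemd-sysext status    # lists merged extensions (containerd-flatcar, docker-flatcar, ...)
    systemd-sysext refresh   # unmerge and re-merge after images change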
Feb 13 20:16:01.945141 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:16:01.947048 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:16:01.947596 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Feb 13 20:16:01.947705 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Feb 13 20:16:01.965914 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:01.965938 systemd-tmpfiles[1389]: Skipping /boot Feb 13 20:16:02.010392 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:02.011122 systemd-tmpfiles[1389]: Skipping /boot Feb 13 20:16:02.176430 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:16:02.236441 zram_generator::config[1423]: No configuration found. Feb 13 20:16:02.534157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:02.702999 systemd[1]: Reloading finished in 773 ms. Feb 13 20:16:02.743072 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:16:02.759789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:02.796681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:16:02.804726 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:16:02.852272 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:16:02.890821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:16:02.900763 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:16:02.924002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:02.932731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:02.954141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:02.976936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:03.008337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:03.011292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:03.011578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:03.035160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:03.035592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:03.054529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:03.054917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:03.072154 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:03.072865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 20:16:03.094318 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:03.094920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:03.107556 augenrules[1501]: No rules Feb 13 20:16:03.108421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:03.125072 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:03.157045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:03.183754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:03.210044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:03.210452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:03.226373 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:16:03.229847 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:16:03.238135 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:16:03.244700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:03.245016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:03.253406 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:03.253724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:03.262290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:03.262639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:03.271084 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:16:03.277734 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:03.282724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:03.315966 systemd[1]: Finished ensure-sysext.service. Feb 13 20:16:03.343455 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:03.343592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:03.355725 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:16:03.364580 systemd-resolved[1484]: Positive Trust Anchors: Feb 13 20:16:03.364610 systemd-resolved[1484]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:03.364659 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:03.373755 systemd-resolved[1484]: Using system hostname 'ci-4081.3.1-d-1eeb8951e4'. Feb 13 20:16:03.377590 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:16:03.382027 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:16:03.383454 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:16:03.388407 systemd[1]: Reached target network.target - Network. Feb 13 20:16:03.389044 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:16:03.389519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:03.416937 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:16:03.523038 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:16:03.528977 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:03.529945 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:16:03.531098 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:16:03.531772 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:16:03.532487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:16:03.532678 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:03.533406 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:16:03.534278 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:16:03.535207 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:16:03.535982 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:03.541292 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:16:03.545981 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:16:03.553710 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:16:03.563727 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:16:03.565736 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:16:03.569376 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:03.570549 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:16:03.570671 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
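The trust-anchor dump above is systemd-resolved loading its built-in root-zone DNSSEC DS record plus the default negative anchors for private and reverse zones. The resulting per-link DNS state can be checked with:

    resolvectl status   # shows DNS servers, DNSSEC state, and search scopes per link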
Feb 13 20:16:03.570716 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:03.579537 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:16:03.599556 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:16:03.619716 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:16:03.642618 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:16:03.660643 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:16:03.663870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:16:03.717369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:03.728656 dbus-daemon[1540]: [system] SELinux support is enabled Feb 13 20:16:03.734596 jq[1541]: false Feb 13 20:16:03.739440 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:16:03.755875 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:16:03.779333 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:16:04.318255 systemd-timesyncd[1530]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). Feb 13 20:16:04.318369 systemd-timesyncd[1530]: Initial clock synchronization to Thu 2025-02-13 20:16:04.315711 UTC. Feb 13 20:16:04.318954 systemd-resolved[1484]: Clock change detected. Flushing caches. Feb 13 20:16:04.348639 coreos-metadata[1538]: Feb 13 20:16:04.333 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:16:04.330709 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:16:04.356272 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:16:04.367220 coreos-metadata[1538]: Feb 13 20:16:04.365 INFO Fetch successful Feb 13 20:16:04.384679 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:16:04.399566 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:16:04.430120 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:16:04.446647 extend-filesystems[1542]: Found loop4 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found loop5 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found loop6 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found loop7 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda1 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda2 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda3 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found usr Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda4 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda6 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda7 Feb 13 20:16:04.446647 extend-filesystems[1542]: Found vda9 Feb 13 20:16:04.446647 extend-filesystems[1542]: Checking size of /dev/vda9 Feb 13 20:16:04.753612 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:16:04.753920 extend-filesystems[1542]: Resized partition /dev/vda9 Feb 13 20:16:04.457041 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
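
coreos-metadata above fetches the droplet description from DigitalOcean's link-local metadata service. When debugging the Flatcar Metadata Agent, the same endpoint can be queried by hand from inside the droplet:

  # Whole metadata document, as fetched by coreos-metadata:
  curl -s http://169.254.169.254/metadata/v1.json
  # Individual fields are also exposed as plain text, e.g. the hostname:
  curl -s http://169.254.169.254/metadata/v1/hostname
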
Feb 13 20:16:04.785011 extend-filesystems[1587]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:16:04.491446 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:16:04.800787 jq[1568]: true Feb 13 20:16:04.801159 update_engine[1565]: I20250213 20:16:04.790612 1565 main.cc:92] Flatcar Update Engine starting Feb 13 20:16:04.554936 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:16:04.555426 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:16:04.567787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:16:04.803005 update_engine[1565]: I20250213 20:16:04.802371 1565 update_check_scheduler.cc:74] Next update check in 5m44s Feb 13 20:16:04.621392 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:16:04.622893 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:16:04.803429 jq[1584]: true Feb 13 20:16:04.654811 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:16:04.669933 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:16:04.778698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:16:04.778757 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:16:04.784917 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:16:04.785045 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:16:04.785080 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:16:04.796638 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:16:04.849545 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:16:04.877802 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:16:04.880136 tar[1582]: linux-amd64/helm Feb 13 20:16:04.881759 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:16:04.886441 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:16:04.893244 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:16:05.029355 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1611) Feb 13 20:16:05.076285 systemd-logind[1558]: New seat seat0. Feb 13 20:16:05.137955 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:16:05.224765 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:16:05.224851 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:16:05.225465 systemd[1]: Started systemd-logind.service - User Login Management. 
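
The resize2fs run logged around this point grows the root filesystem online from 553472 to 15121403 4k blocks (roughly 2 GiB to 58 GiB), which is how a small Flatcar root image expands to fill the droplet disk on first boot. A sketch of the equivalent manual flow (device names from this log; growpart comes from cloud-utils and is an assumption here, Flatcar's extend-filesystems unit has its own partition logic):

  sudo growpart /dev/vda 9    # grow partition 9 to fill the disk
  sudo resize2fs /dev/vda9    # ext4 resizes online while mounted on /
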
Feb 13 20:16:05.244639 extend-filesystems[1587]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:16:05.244639 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:16:05.244639 extend-filesystems[1587]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:16:05.283613 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Feb 13 20:16:05.283613 extend-filesystems[1542]: Found vdb Feb 13 20:16:05.271552 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:16:05.273727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:16:05.332177 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:05.315555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:16:05.347716 systemd[1]: Starting sshkeys.service... Feb 13 20:16:05.444735 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:16:05.459674 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:16:05.656736 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:16:05.708885 coreos-metadata[1644]: Feb 13 20:16:05.707 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:16:05.737644 coreos-metadata[1644]: Feb 13 20:16:05.737 INFO Fetch successful Feb 13 20:16:05.799210 unknown[1644]: wrote ssh authorized keys file for user: core Feb 13 20:16:05.940236 update-ssh-keys[1653]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:05.936364 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:16:05.966582 systemd[1]: Finished sshkeys.service. Feb 13 20:16:06.077910 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:16:06.273202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:16:06.294998 containerd[1590]: time="2025-02-13T20:16:06.294462635Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:16:06.304909 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:16:06.373223 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:16:06.373814 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:16:06.420619 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:16:06.434001 containerd[1590]: time="2025-02-13T20:16:06.433443564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.439858 containerd[1590]: time="2025-02-13T20:16:06.439424679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:06.439858 containerd[1590]: time="2025-02-13T20:16:06.439535213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:16:06.439858 containerd[1590]: time="2025-02-13T20:16:06.439571049Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.440863417Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.440924428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441054971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441074244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441470660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441502045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441522119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441539055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.441656262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.442873 containerd[1590]: time="2025-02-13T20:16:06.442024043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:06.444118 containerd[1590]: time="2025-02-13T20:16:06.444050041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:06.444118 containerd[1590]: time="2025-02-13T20:16:06.444111741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:16:06.444427 containerd[1590]: time="2025-02-13T20:16:06.444388538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:16:06.444521 containerd[1590]: time="2025-02-13T20:16:06.444497700Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:16:06.457868 containerd[1590]: time="2025-02-13T20:16:06.456303157Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:16:06.457868 containerd[1590]: time="2025-02-13T20:16:06.456419608Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:16:06.457868 containerd[1590]: time="2025-02-13T20:16:06.456448050Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 20:16:06.457868 containerd[1590]: time="2025-02-13T20:16:06.456528181Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:16:06.457868 containerd[1590]: time="2025-02-13T20:16:06.456567925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:16:06.457868 containerd[1590]: time="2025-02-13T20:16:06.456806751Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.459815775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460143390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460179181Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460194446Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460211297Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460240815Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460258314Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460275413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460292067Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460306377Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460321328Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460338189Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460362102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.462287 containerd[1590]: time="2025-02-13T20:16:06.460378372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460393172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460421698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460437604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460539883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460557213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460572541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460586418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460602536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460618719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460635057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460648172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460666861Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460707679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460723877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463152 containerd[1590]: time="2025-02-13T20:16:06.460734947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.460792627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.460821593Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.461917601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.461945735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.461956801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.461973693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.461987013Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:16:06.463615 containerd[1590]: time="2025-02-13T20:16:06.461999222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:16:06.466153 containerd[1590]: time="2025-02-13T20:16:06.462362769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:16:06.466153 containerd[1590]: time="2025-02-13T20:16:06.462466706Z" level=info msg="Connect containerd service" Feb 13 20:16:06.466153 containerd[1590]: time="2025-02-13T20:16:06.462571099Z" level=info msg="using legacy CRI server" Feb 13 20:16:06.466153 containerd[1590]: time="2025-02-13T20:16:06.462585707Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:16:06.466153 containerd[1590]: time="2025-02-13T20:16:06.462728018Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:16:06.466153 
containerd[1590]: time="2025-02-13T20:16:06.463666980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.469002007Z" level=info msg="Start subscribing containerd event" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.469115109Z" level=info msg="Start recovering state" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.469267889Z" level=info msg="Start event monitor" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.469292058Z" level=info msg="Start snapshots syncer" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.469309862Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.469320710Z" level=info msg="Start streaming server" Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.470596748Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.470668779Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:16:06.472978 containerd[1590]: time="2025-02-13T20:16:06.470769343Z" level=info msg="containerd successfully booted in 0.180660s" Feb 13 20:16:06.469596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:16:06.475645 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:16:06.495400 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:16:06.524957 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:16:06.530148 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:16:07.397098 tar[1582]: linux-amd64/LICENSE Feb 13 20:16:07.397873 tar[1582]: linux-amd64/README.md Feb 13 20:16:07.418329 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:16:07.809275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:07.815505 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:07.815976 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:16:07.820517 systemd[1]: Startup finished in 8.841s (kernel) + 12.321s (userspace) = 21.163s. Feb 13 20:16:08.927944 kubelet[1696]: E0213 20:16:08.924627 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:08.935638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:08.935935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:11.980382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:16:11.994402 systemd[1]: Started sshd@0-137.184.189.10:22-147.75.109.163:59210.service - OpenSSH per-connection server daemon (147.75.109.163:59210). 
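
The "cni config load failed" error above is expected this early in boot: the CRI plugin watches the NetworkPluginConfDir from the config dump (/etc/cni/net.d, with the conf syncer started a few lines later), and pod networking stays uninitialized until a network add-on drops a config there. A minimal example of the kind of file it waits for, using the reference bridge and host-local plugins (network name and subnet are illustrative, not from this host):

  cat <<'EOF' | sudo tee /etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
      }
    ]
  }
  EOF
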
Feb 13 20:16:12.123883 sshd[1709]: Accepted publickey for core from 147.75.109.163 port 59210 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:12.127310 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:12.142462 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:16:12.150424 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:16:12.155090 systemd-logind[1558]: New session 1 of user core. Feb 13 20:16:12.189564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:16:12.201456 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:16:12.209479 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:16:12.378178 systemd[1715]: Queued start job for default target default.target. Feb 13 20:16:12.380317 systemd[1715]: Created slice app.slice - User Application Slice. Feb 13 20:16:12.380577 systemd[1715]: Reached target paths.target - Paths. Feb 13 20:16:12.380680 systemd[1715]: Reached target timers.target - Timers. Feb 13 20:16:12.407296 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:16:12.418636 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:16:12.419904 systemd[1715]: Reached target sockets.target - Sockets. Feb 13 20:16:12.420045 systemd[1715]: Reached target basic.target - Basic System. Feb 13 20:16:12.420130 systemd[1715]: Reached target default.target - Main User Target. Feb 13 20:16:12.420174 systemd[1715]: Startup finished in 200ms. Feb 13 20:16:12.421093 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:16:12.432638 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:16:12.516734 systemd[1]: Started sshd@1-137.184.189.10:22-147.75.109.163:59214.service - OpenSSH per-connection server daemon (147.75.109.163:59214). Feb 13 20:16:12.586042 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 59214 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:12.588782 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:12.608819 systemd-logind[1558]: New session 2 of user core. Feb 13 20:16:12.617639 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:16:12.702467 sshd[1727]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:12.732096 systemd[1]: Started sshd@2-137.184.189.10:22-147.75.109.163:59224.service - OpenSSH per-connection server daemon (147.75.109.163:59224). Feb 13 20:16:12.733068 systemd[1]: sshd@1-137.184.189.10:22-147.75.109.163:59214.service: Deactivated successfully. Feb 13 20:16:12.767082 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:16:12.787718 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:16:12.796172 systemd-logind[1558]: Removed session 2. Feb 13 20:16:12.831679 sshd[1732]: Accepted publickey for core from 147.75.109.163 port 59224 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:12.834532 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:12.845823 systemd-logind[1558]: New session 3 of user core. Feb 13 20:16:12.857477 systemd[1]: Started session-3.scope - Session 3 of User core. 
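
The unit name sshd@0-137.184.189.10:22-147.75.109.163:59210.service encodes per-connection socket activation: sshd.socket, listening since early boot, accepts each TCP connection and spawns one templated sshd@.service instance for it, so every login in this log gets its own unit. The pattern is visible directly (on Flatcar, expect Accept=yes in the socket unit):

  systemctl cat sshd.socket        # socket unit with ListenStream= and Accept=
  systemctl list-units 'sshd@*'    # one running instance per open connection
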
Feb 13 20:16:12.934581 sshd[1732]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:12.950538 systemd[1]: Started sshd@3-137.184.189.10:22-147.75.109.163:59240.service - OpenSSH per-connection server daemon (147.75.109.163:59240). Feb 13 20:16:12.951598 systemd[1]: sshd@2-137.184.189.10:22-147.75.109.163:59224.service: Deactivated successfully. Feb 13 20:16:12.960380 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:16:12.963996 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:16:12.966435 systemd-logind[1558]: Removed session 3. Feb 13 20:16:13.012883 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 59240 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:13.016111 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:13.030809 systemd-logind[1558]: New session 4 of user core. Feb 13 20:16:13.034503 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:16:13.107633 sshd[1740]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:13.119539 systemd[1]: Started sshd@4-137.184.189.10:22-147.75.109.163:59252.service - OpenSSH per-connection server daemon (147.75.109.163:59252). Feb 13 20:16:13.120572 systemd[1]: sshd@3-137.184.189.10:22-147.75.109.163:59240.service: Deactivated successfully. Feb 13 20:16:13.126160 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:16:13.130157 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:16:13.133262 systemd-logind[1558]: Removed session 4. Feb 13 20:16:13.184948 sshd[1748]: Accepted publickey for core from 147.75.109.163 port 59252 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:13.186676 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:13.200944 systemd-logind[1558]: New session 5 of user core. Feb 13 20:16:13.213692 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:16:13.298751 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:16:13.299375 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:16:13.332561 sudo[1755]: pam_unix(sudo:session): session closed for user root Feb 13 20:16:13.338476 sshd[1748]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:13.351515 systemd[1]: Started sshd@5-137.184.189.10:22-147.75.109.163:59264.service - OpenSSH per-connection server daemon (147.75.109.163:59264). Feb 13 20:16:13.353084 systemd[1]: sshd@4-137.184.189.10:22-147.75.109.163:59252.service: Deactivated successfully. Feb 13 20:16:13.365213 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:16:13.369698 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:16:13.374594 systemd-logind[1558]: Removed session 5. Feb 13 20:16:13.431630 sshd[1757]: Accepted publickey for core from 147.75.109.163 port 59264 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:13.434495 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:13.451646 systemd-logind[1558]: New session 6 of user core. Feb 13 20:16:13.459798 systemd[1]: Started session-6.scope - Session 6 of User core. 
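
The first sudo of session 5 above flips SELinux into enforcing mode. setenforce changes only the running state; a quick sketch of checking and persisting it (the /etc/selinux/config path is the stock SELinux location and an assumption for this image):

  getenforce           # prints Enforcing or Permissive
  sudo setenforce 1    # runtime-only switch, as in the log above
  # Persisting across reboots normally means SELINUX=enforcing in
  # /etc/selinux/config (assumption: stock SELinux layout).
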
Feb 13 20:16:13.544249 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:16:13.544708 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:16:13.552731 sudo[1765]: pam_unix(sudo:session): session closed for user root Feb 13 20:16:13.565419 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:16:13.566413 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:16:13.594375 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:16:13.598764 auditctl[1768]: No rules Feb 13 20:16:13.600855 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:16:13.601908 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:16:13.609692 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:16:13.653775 augenrules[1787]: No rules Feb 13 20:16:13.657618 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:16:13.662038 sudo[1764]: pam_unix(sudo:session): session closed for user root Feb 13 20:16:13.667363 sshd[1757]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:13.678519 systemd[1]: Started sshd@6-137.184.189.10:22-147.75.109.163:59270.service - OpenSSH per-connection server daemon (147.75.109.163:59270). Feb 13 20:16:13.679967 systemd[1]: sshd@5-137.184.189.10:22-147.75.109.163:59264.service: Deactivated successfully. Feb 13 20:16:13.684432 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:16:13.685588 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:16:13.688522 systemd-logind[1558]: Removed session 6. Feb 13 20:16:13.731875 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 59270 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:13.732931 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:13.740919 systemd-logind[1558]: New session 7 of user core. Feb 13 20:16:13.752435 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:16:13.822549 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:16:13.823732 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:16:14.506706 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:16:14.507820 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:16:15.127348 dockerd[1815]: time="2025-02-13T20:16:15.127251979Z" level=info msg="Starting up" Feb 13 20:16:15.491048 dockerd[1815]: time="2025-02-13T20:16:15.490303461Z" level=info msg="Loading containers: start." Feb 13 20:16:15.731883 kernel: Initializing XFRM netlink socket Feb 13 20:16:15.891169 systemd-networkd[1224]: docker0: Link UP Feb 13 20:16:15.933593 dockerd[1815]: time="2025-02-13T20:16:15.932366777Z" level=info msg="Loading containers: done." Feb 13 20:16:15.962642 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3207661960-merged.mount: Deactivated successfully. 
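
The session-6 sudo commands above delete two files from /etc/audit/rules.d/ and restart audit-rules, after which auditctl and augenrules both report "No rules": augenrules merges /etc/audit/rules.d/*.rules into the kernel rule set, and the directory is now empty. Putting a rule back follows the same path (file name and watch rule are illustrative):

  # Example: audit writes to /etc/passwd (illustrative, not from this host).
  echo '-w /etc/passwd -p wa -k passwd-changes' \
    | sudo tee /etc/audit/rules.d/10-passwd.rules
  sudo augenrules --load    # regenerate and load the merged rule set
  sudo auditctl -l          # list the rules now active
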
Feb 13 20:16:15.971220 dockerd[1815]: time="2025-02-13T20:16:15.971060591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:16:15.971491 dockerd[1815]: time="2025-02-13T20:16:15.971334209Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:16:15.971567 dockerd[1815]: time="2025-02-13T20:16:15.971537853Z" level=info msg="Daemon has completed initialization" Feb 13 20:16:16.031963 dockerd[1815]: time="2025-02-13T20:16:16.031022610Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:16:16.031461 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:16:17.371379 containerd[1590]: time="2025-02-13T20:16:17.370959630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:16:18.153166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264551449.mount: Deactivated successfully. Feb 13 20:16:19.083406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:19.097336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:19.371120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:19.380710 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:19.508899 kubelet[2035]: E0213 20:16:19.508299 2035 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:19.519159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:19.519464 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
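
kubelet has now failed twice in the same way (here and at 20:16:08): the unit starts, finds no /var/lib/kubelet/config.yaml, and exits. That is the normal pre-bootstrap state; kubeadm (or whatever the earlier install.sh set in motion) is expected to write the file. A minimal sketch of what it looks for, with illustrative values, though cgroupDriver: cgroupfs matches the SystemdCgroup:false runc option in the containerd dump above:

  cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: cgroupfs
  staticPodPath: /etc/kubernetes/manifests
  EOF
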
Feb 13 20:16:20.785414 containerd[1590]: time="2025-02-13T20:16:20.783433434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:20.788216 containerd[1590]: time="2025-02-13T20:16:20.788054424Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 20:16:20.799183 containerd[1590]: time="2025-02-13T20:16:20.793412538Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:20.803078 containerd[1590]: time="2025-02-13T20:16:20.801197983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:20.806188 containerd[1590]: time="2025-02-13T20:16:20.806105865Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.435070726s" Feb 13 20:16:20.806188 containerd[1590]: time="2025-02-13T20:16:20.806188872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:16:20.849543 containerd[1590]: time="2025-02-13T20:16:20.849486461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:16:24.498867 containerd[1590]: time="2025-02-13T20:16:24.497190945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:24.518012 containerd[1590]: time="2025-02-13T20:16:24.517874657Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 20:16:24.531946 containerd[1590]: time="2025-02-13T20:16:24.531861926Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:24.538657 containerd[1590]: time="2025-02-13T20:16:24.538560325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:24.543072 containerd[1590]: time="2025-02-13T20:16:24.542978729Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 3.69316325s" Feb 13 20:16:24.543429 containerd[1590]: time="2025-02-13T20:16:24.543393359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
20:16:24.609143 containerd[1590]: time="2025-02-13T20:16:24.608948255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:16:24.651967 systemd-resolved[1484]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Feb 13 20:16:27.041225 containerd[1590]: time="2025-02-13T20:16:27.039426445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:27.049086 containerd[1590]: time="2025-02-13T20:16:27.048774475Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 20:16:27.052079 containerd[1590]: time="2025-02-13T20:16:27.050043324Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:27.062148 containerd[1590]: time="2025-02-13T20:16:27.062026409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:27.065146 containerd[1590]: time="2025-02-13T20:16:27.064292869Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 2.454616201s" Feb 13 20:16:27.065146 containerd[1590]: time="2025-02-13T20:16:27.064366837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:16:27.145796 containerd[1590]: time="2025-02-13T20:16:27.142089365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:16:27.757400 systemd-resolved[1484]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 20:16:28.801645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199000613.mount: Deactivated successfully. Feb 13 20:16:29.583997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:16:29.598999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:29.895247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
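
The PullImage/ImageCreate lines above are containerd servicing CRI requests. The same operation can be driven by hand with crictl against the socket from the config dump, which helps separate registry problems from kubelet problems (endpoint copied from this log; crictl itself is assumed to be installed):

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
    pull registry.k8s.io/kube-proxy:v1.30.10
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
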
Feb 13 20:16:29.898065 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:30.076005 containerd[1590]: time="2025-02-13T20:16:30.075920296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:30.079163 containerd[1590]: time="2025-02-13T20:16:30.079028288Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:16:30.080275 containerd[1590]: time="2025-02-13T20:16:30.080217425Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:30.091939 kubelet[2083]: E0213 20:16:30.080397 2083 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:30.087746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:30.092358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:30.105252 containerd[1590]: time="2025-02-13T20:16:30.105151209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:30.106434 containerd[1590]: time="2025-02-13T20:16:30.106297088Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.964134606s" Feb 13 20:16:30.106867 containerd[1590]: time="2025-02-13T20:16:30.106804884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:16:30.159464 containerd[1590]: time="2025-02-13T20:16:30.159241882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:16:30.886093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908025690.mount: Deactivated successfully. 
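
Each completed pull above records both the floating tag and the immutable repo digest it resolved to. For reproducible nodes, the digest form can be requested directly; this reuses the kube-proxy digest containerd just logged:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
    pull registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48
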
Feb 13 20:16:32.937299 containerd[1590]: time="2025-02-13T20:16:32.937149475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:32.939866 containerd[1590]: time="2025-02-13T20:16:32.939737613Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:16:32.941409 containerd[1590]: time="2025-02-13T20:16:32.941294494Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:32.949220 containerd[1590]: time="2025-02-13T20:16:32.949135451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:32.952461 containerd[1590]: time="2025-02-13T20:16:32.951994880Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.79269684s" Feb 13 20:16:32.952461 containerd[1590]: time="2025-02-13T20:16:32.952282293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:16:32.989476 containerd[1590]: time="2025-02-13T20:16:32.989411603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:16:32.995030 systemd-resolved[1484]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Feb 13 20:16:33.561242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994569632.mount: Deactivated successfully. 
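
systemd-resolved has now stepped the upstream 67.207.67.2 down twice, from UDP+EDNS0 to UDP at 20:16:27 and to plain TCP here: it probes each server and lowers the feature level after repeated timeouts. The live resolver state can be inspected with resolvectl (the query target reuses the NTP pool name from this log):

  resolvectl status                         # per-link servers and DNSSEC state
  resolvectl query 0.flatcar.pool.ntp.org   # exercise the resolver end to end
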
Feb 13 20:16:33.568635 containerd[1590]: time="2025-02-13T20:16:33.568518116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:33.571008 containerd[1590]: time="2025-02-13T20:16:33.570539361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 20:16:33.572783 containerd[1590]: time="2025-02-13T20:16:33.572282329Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:33.577269 containerd[1590]: time="2025-02-13T20:16:33.577191022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:33.578806 containerd[1590]: time="2025-02-13T20:16:33.578740689Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 589.265444ms" Feb 13 20:16:33.579171 containerd[1590]: time="2025-02-13T20:16:33.579010741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:16:33.618276 containerd[1590]: time="2025-02-13T20:16:33.617951851Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:16:34.272799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248823659.mount: Deactivated successfully. Feb 13 20:16:37.470896 containerd[1590]: time="2025-02-13T20:16:37.465992439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:37.474997 containerd[1590]: time="2025-02-13T20:16:37.474871061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 20:16:37.477869 containerd[1590]: time="2025-02-13T20:16:37.477137791Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:37.484455 containerd[1590]: time="2025-02-13T20:16:37.484375286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:37.488958 containerd[1590]: time="2025-02-13T20:16:37.488079427Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.870018515s" Feb 13 20:16:37.488958 containerd[1590]: time="2025-02-13T20:16:37.488198306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:16:40.332603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
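
The restart counter reaching 3 shows systemd's Restart= policy re-running kubelet after each config-file failure rather than giving up. The loop is easy to confirm from the unit's properties and recent journal:

  systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
  journalctl -u kubelet.service -n 20 --no-pager
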
Feb 13 20:16:40.340262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:40.551363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:40.567971 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:40.665209 kubelet[2271]: E0213 20:16:40.665028 2271 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:40.671234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:40.671761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:41.506025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:41.528820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:41.595117 systemd[1]: Reloading requested from client PID 2288 ('systemctl') (unit session-7.scope)... Feb 13 20:16:41.595142 systemd[1]: Reloading... Feb 13 20:16:41.802884 zram_generator::config[2331]: No configuration found. Feb 13 20:16:42.022440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:42.113119 systemd[1]: Reloading finished in 516 ms. Feb 13 20:16:42.202333 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:16:42.202814 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:16:42.203654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:42.208457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:42.446510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:42.447492 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:16:42.548258 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:42.548258 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:16:42.548258 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
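
During the reload above, systemd warns that docker.socket line 6 still references legacy /var/run/docker.sock and rewrites it to /run/docker.sock on the fly. The durable fix it asks for is a drop-in that clears and resets ListenStream (standard override location; the drop-in file name is illustrative):

  sudo mkdir -p /etc/systemd/system/docker.socket.d
  cat <<'EOF' | sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf
  [Socket]
  # An empty assignment clears the inherited list before re-adding the path.
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  sudo systemctl daemon-reload
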
Feb 13 20:16:42.552081 kubelet[2393]: I0213 20:16:42.551252 2393 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:16:43.197333 kubelet[2393]: I0213 20:16:43.197260 2393 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 20:16:43.197945 kubelet[2393]: I0213 20:16:43.197584 2393 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:16:43.198297 kubelet[2393]: I0213 20:16:43.198272 2393 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 20:16:43.248026 kubelet[2393]: I0213 20:16:43.247741 2393 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:16:43.249506 kubelet[2393]: E0213 20:16:43.249406 2393 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.189.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.189.10:6443: connect: connection refused
Feb 13 20:16:43.268063 kubelet[2393]: I0213 20:16:43.267971 2393 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:16:43.268659 kubelet[2393]: I0213 20:16:43.268614 2393 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:16:43.269060 kubelet[2393]: I0213 20:16:43.268659 2393 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-d-1eeb8951e4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 20:16:43.269060 kubelet[2393]: I0213 20:16:43.269069 2393 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:16:43.269359 kubelet[2393]: I0213 20:16:43.269086 2393 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 20:16:43.269359 kubelet[2393]: I0213 20:16:43.269269 2393 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:16:43.271896 kubelet[2393]: I0213 20:16:43.271482 2393 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 20:16:43.271896 kubelet[2393]: I0213 20:16:43.271532 2393 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:16:43.271896 kubelet[2393]: I0213 20:16:43.271568 2393 kubelet.go:312] "Adding apiserver pod source"
Feb 13 20:16:43.271896 kubelet[2393]: I0213 20:16:43.271598 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:16:43.280302 kubelet[2393]: W0213 20:16:43.279581 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.189.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-d-1eeb8951e4&limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused
Feb 13 20:16:43.280302 kubelet[2393]: E0213 20:16:43.279779 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.189.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-d-1eeb8951e4&limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused
Feb 13 20:16:43.280302 kubelet[2393]: W0213 20:16:43.279894 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.189.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused
Feb 13 20:16:43.280302 kubelet[2393]: E0213 20:16:43.279931 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.189.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused
Feb 13 20:16:43.282019 kubelet[2393]: I0213 20:16:43.281214 2393 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:16:43.283461 kubelet[2393]: I0213 20:16:43.283410 2393 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:16:43.283582 kubelet[2393]: W0213 20:16:43.283526 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
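[Editor's note] The reflector.go warnings above, and the controller.go "Failed to ensure lease exists, will retry" errors further down, all trace to the apiserver at 137.184.189.10:6443 not listening yet; the retry interval doubles from 200ms to 400ms, 800ms, then 1.6s. A rough sketch of that probe-and-double pattern; the endpoint and intervals mirror the log, but the code is illustrative and not kubelet's:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Apiserver endpoint taken from the log entries above.
        addr := "137.184.189.10:6443"
        wait := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                fmt.Printf("attempt %d: apiserver reachable\n", attempt)
                return
            }
            fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, wait)
            time.Sleep(wait)
            wait *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the lease errors
        }
    }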
Feb 13 20:16:43.284621 kubelet[2393]: I0213 20:16:43.284585 2393 server.go:1264] "Started kubelet" Feb 13 20:16:43.286603 kubelet[2393]: I0213 20:16:43.286542 2393 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:16:43.288323 kubelet[2393]: I0213 20:16:43.288281 2393 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:16:43.291647 kubelet[2393]: I0213 20:16:43.291384 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:16:43.292891 kubelet[2393]: I0213 20:16:43.292088 2393 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:16:43.292891 kubelet[2393]: E0213 20:16:43.292322 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.189.10:6443/api/v1/namespaces/default/events\": dial tcp 137.184.189.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-d-1eeb8951e4.1823ddd88f73b70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-d-1eeb8951e4,UID:ci-4081.3.1-d-1eeb8951e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-d-1eeb8951e4,},FirstTimestamp:2025-02-13 20:16:43.284543242 +0000 UTC m=+0.829952122,LastTimestamp:2025-02-13 20:16:43.284543242 +0000 UTC m=+0.829952122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-d-1eeb8951e4,}" Feb 13 20:16:43.295611 kubelet[2393]: I0213 20:16:43.295322 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:16:43.299330 kubelet[2393]: E0213 20:16:43.298178 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-d-1eeb8951e4\" not found" Feb 13 20:16:43.299330 kubelet[2393]: I0213 20:16:43.298276 2393 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:16:43.299330 kubelet[2393]: I0213 20:16:43.298404 2393 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:16:43.299330 kubelet[2393]: I0213 20:16:43.298495 2393 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:16:43.299330 kubelet[2393]: W0213 20:16:43.299007 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.189.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:43.299330 kubelet[2393]: E0213 20:16:43.299074 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.189.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:43.299330 kubelet[2393]: E0213 20:16:43.299317 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-d-1eeb8951e4?timeout=10s\": dial tcp 137.184.189.10:6443: connect: connection refused" interval="200ms" Feb 13 20:16:43.310671 kubelet[2393]: I0213 20:16:43.310619 2393 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:16:43.310671 kubelet[2393]: I0213 
20:16:43.310656 2393 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:16:43.310943 kubelet[2393]: I0213 20:16:43.310791 2393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:16:43.314949 kubelet[2393]: E0213 20:16:43.314644 2393 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:16:43.329392 kubelet[2393]: I0213 20:16:43.329180 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:16:43.342304 kubelet[2393]: I0213 20:16:43.341197 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:16:43.342304 kubelet[2393]: I0213 20:16:43.341261 2393 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:16:43.342304 kubelet[2393]: I0213 20:16:43.341296 2393 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:16:43.342304 kubelet[2393]: E0213 20:16:43.341387 2393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:16:43.349015 kubelet[2393]: W0213 20:16:43.348943 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.189.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:43.349285 kubelet[2393]: E0213 20:16:43.349262 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.189.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:43.363813 kubelet[2393]: I0213 20:16:43.363768 2393 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:16:43.364183 kubelet[2393]: I0213 20:16:43.364162 2393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:16:43.364281 kubelet[2393]: I0213 20:16:43.364268 2393 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:43.369004 kubelet[2393]: I0213 20:16:43.368926 2393 policy_none.go:49] "None policy: Start" Feb 13 20:16:43.370870 kubelet[2393]: I0213 20:16:43.370817 2393 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:16:43.371092 kubelet[2393]: I0213 20:16:43.371081 2393 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:16:43.380093 kubelet[2393]: I0213 20:16:43.380034 2393 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:16:43.380669 kubelet[2393]: I0213 20:16:43.380611 2393 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:16:43.380997 kubelet[2393]: I0213 20:16:43.380984 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:16:43.388963 kubelet[2393]: E0213 20:16:43.388927 2393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-d-1eeb8951e4\" not found" Feb 13 20:16:43.401276 kubelet[2393]: I0213 20:16:43.400630 2393 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.401684 kubelet[2393]: E0213 20:16:43.401644 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.10:6443/api/v1/nodes\": dial tcp 137.184.189.10:6443: connect: connection refused" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.442396 kubelet[2393]: I0213 20:16:43.442219 2393 topology_manager.go:215] "Topology Admit Handler" podUID="b84f2ce68df271cb55d11d2ce0117130" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.444475 kubelet[2393]: I0213 20:16:43.443914 2393 topology_manager.go:215] "Topology Admit Handler" podUID="061857c1455f8987c5be9fc90fa140a4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.445901 kubelet[2393]: I0213 20:16:43.445731 2393 topology_manager.go:215] "Topology Admit Handler" podUID="a5db0bf093c980ede85b89344662693d" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.501066 kubelet[2393]: I0213 20:16:43.500809 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.501325 kubelet[2393]: E0213 20:16:43.501102 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-d-1eeb8951e4?timeout=10s\": dial tcp 137.184.189.10:6443: connect: connection refused" interval="400ms" Feb 13 20:16:43.501398 kubelet[2393]: I0213 20:16:43.501294 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.501595 kubelet[2393]: I0213 20:16:43.501572 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5db0bf093c980ede85b89344662693d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-d-1eeb8951e4\" (UID: \"a5db0bf093c980ede85b89344662693d\") " pod="kube-system/kube-scheduler-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.501824 kubelet[2393]: I0213 20:16:43.501799 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b84f2ce68df271cb55d11d2ce0117130-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" (UID: \"b84f2ce68df271cb55d11d2ce0117130\") " pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.502033 kubelet[2393]: I0213 20:16:43.501981 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b84f2ce68df271cb55d11d2ce0117130-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" (UID: \"b84f2ce68df271cb55d11d2ce0117130\") " pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.502190 kubelet[2393]: I0213 20:16:43.502170 
2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.502350 kubelet[2393]: I0213 20:16:43.502331 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.502472 kubelet[2393]: I0213 20:16:43.502459 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b84f2ce68df271cb55d11d2ce0117130-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" (UID: \"b84f2ce68df271cb55d11d2ce0117130\") " pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.502579 kubelet[2393]: I0213 20:16:43.502568 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.605695 kubelet[2393]: I0213 20:16:43.605630 2393 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.607507 kubelet[2393]: E0213 20:16:43.607457 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.10:6443/api/v1/nodes\": dial tcp 137.184.189.10:6443: connect: connection refused" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:43.749910 kubelet[2393]: E0213 20:16:43.749768 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:43.751585 containerd[1590]: time="2025-02-13T20:16:43.751023163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-d-1eeb8951e4,Uid:b84f2ce68df271cb55d11d2ce0117130,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:43.756338 kubelet[2393]: E0213 20:16:43.756283 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:43.757646 kubelet[2393]: E0213 20:16:43.757103 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:43.761528 containerd[1590]: time="2025-02-13T20:16:43.761441184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-d-1eeb8951e4,Uid:a5db0bf093c980ede85b89344662693d,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:43.762100 containerd[1590]: time="2025-02-13T20:16:43.761992261Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-d-1eeb8951e4,Uid:061857c1455f8987c5be9fc90fa140a4,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:43.902313 kubelet[2393]: E0213 20:16:43.902236 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-d-1eeb8951e4?timeout=10s\": dial tcp 137.184.189.10:6443: connect: connection refused" interval="800ms" Feb 13 20:16:44.010526 kubelet[2393]: I0213 20:16:44.009943 2393 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:44.010526 kubelet[2393]: E0213 20:16:44.010359 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.10:6443/api/v1/nodes\": dial tcp 137.184.189.10:6443: connect: connection refused" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:44.226989 kubelet[2393]: W0213 20:16:44.226763 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.189.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.226989 kubelet[2393]: E0213 20:16:44.226921 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.189.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.370692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568674194.mount: Deactivated successfully. Feb 13 20:16:44.387236 containerd[1590]: time="2025-02-13T20:16:44.387073071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:44.391988 containerd[1590]: time="2025-02-13T20:16:44.390899714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:16:44.401727 containerd[1590]: time="2025-02-13T20:16:44.401176435Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:44.406651 containerd[1590]: time="2025-02-13T20:16:44.403162178Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:44.406651 containerd[1590]: time="2025-02-13T20:16:44.405631411Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:44.412641 containerd[1590]: time="2025-02-13T20:16:44.410863670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:16:44.418819 containerd[1590]: time="2025-02-13T20:16:44.416980053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:44.418819 containerd[1590]: 
time="2025-02-13T20:16:44.418581104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.511982ms" Feb 13 20:16:44.420291 containerd[1590]: time="2025-02-13T20:16:44.419432013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:16:44.422867 containerd[1590]: time="2025-02-13T20:16:44.422755955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.171101ms" Feb 13 20:16:44.447485 containerd[1590]: time="2025-02-13T20:16:44.447382793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 696.229063ms" Feb 13 20:16:44.636410 kubelet[2393]: W0213 20:16:44.625447 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.189.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.636410 kubelet[2393]: E0213 20:16:44.625573 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.189.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.710081 kubelet[2393]: E0213 20:16:44.705622 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-d-1eeb8951e4?timeout=10s\": dial tcp 137.184.189.10:6443: connect: connection refused" interval="1.6s" Feb 13 20:16:44.715651 kubelet[2393]: W0213 20:16:44.715438 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.189.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-d-1eeb8951e4&limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.715651 kubelet[2393]: E0213 20:16:44.715584 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.189.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-d-1eeb8951e4&limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.816663 kubelet[2393]: I0213 20:16:44.816619 2393 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:44.821228 kubelet[2393]: E0213 20:16:44.821150 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.10:6443/api/v1/nodes\": dial tcp 137.184.189.10:6443: connect: connection refused" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:44.838664 containerd[1590]: time="2025-02-13T20:16:44.838332477Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:44.841563 containerd[1590]: time="2025-02-13T20:16:44.840250745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:44.842642 containerd[1590]: time="2025-02-13T20:16:44.841323793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:44.842642 containerd[1590]: time="2025-02-13T20:16:44.842503008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:44.869232 containerd[1590]: time="2025-02-13T20:16:44.868594316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:44.869463 containerd[1590]: time="2025-02-13T20:16:44.869361444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:44.869817 containerd[1590]: time="2025-02-13T20:16:44.869462161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:44.873741 containerd[1590]: time="2025-02-13T20:16:44.870295650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:44.873741 containerd[1590]: time="2025-02-13T20:16:44.871740886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:44.873741 containerd[1590]: time="2025-02-13T20:16:44.871823421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:44.873741 containerd[1590]: time="2025-02-13T20:16:44.871897620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:44.873741 containerd[1590]: time="2025-02-13T20:16:44.872089171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:44.897963 kubelet[2393]: W0213 20:16:44.897261 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.189.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:44.899856 kubelet[2393]: E0213 20:16:44.898904 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.189.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:45.065948 containerd[1590]: time="2025-02-13T20:16:45.064884866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-d-1eeb8951e4,Uid:a5db0bf093c980ede85b89344662693d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b09bbdd7cd9af4a5adf3b95565e0303319159235c2cdef84093c2a80cc1ed74\"" Feb 13 20:16:45.072941 containerd[1590]: time="2025-02-13T20:16:45.072870397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-d-1eeb8951e4,Uid:b84f2ce68df271cb55d11d2ce0117130,Namespace:kube-system,Attempt:0,} returns sandbox id \"1270b4a69365936657899c36a37bf82fd58cd2e64cd76633e4b5d084bfc0cc5c\"" Feb 13 20:16:45.078673 kubelet[2393]: E0213 20:16:45.075279 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:45.078673 kubelet[2393]: E0213 20:16:45.075370 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:45.085671 containerd[1590]: time="2025-02-13T20:16:45.084527497Z" level=info msg="CreateContainer within sandbox \"2b09bbdd7cd9af4a5adf3b95565e0303319159235c2cdef84093c2a80cc1ed74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:16:45.086142 containerd[1590]: time="2025-02-13T20:16:45.084941957Z" level=info msg="CreateContainer within sandbox \"1270b4a69365936657899c36a37bf82fd58cd2e64cd76633e4b5d084bfc0cc5c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:16:45.114824 containerd[1590]: time="2025-02-13T20:16:45.114763987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-d-1eeb8951e4,Uid:061857c1455f8987c5be9fc90fa140a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"79d6c23af9ac603e8c3f1e411be47554e1824be766df1d48dff079da5cbcb020\"" Feb 13 20:16:45.118807 kubelet[2393]: E0213 20:16:45.118292 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:45.123356 containerd[1590]: time="2025-02-13T20:16:45.123191853Z" level=info msg="CreateContainer within sandbox \"79d6c23af9ac603e8c3f1e411be47554e1824be766df1d48dff079da5cbcb020\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:16:45.164814 containerd[1590]: time="2025-02-13T20:16:45.164589991Z" level=info msg="CreateContainer within sandbox \"1270b4a69365936657899c36a37bf82fd58cd2e64cd76633e4b5d084bfc0cc5c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"d4323de46d1d9034f299bc6ecb6618ed453c8fe25342a24270759529e4afa486\"" Feb 13 20:16:45.166589 containerd[1590]: time="2025-02-13T20:16:45.166256031Z" level=info msg="CreateContainer within sandbox \"2b09bbdd7cd9af4a5adf3b95565e0303319159235c2cdef84093c2a80cc1ed74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a60cfc30dd48fa8c589fe2e986e4b306d172fc29704af709fcc47d99ad355311\"" Feb 13 20:16:45.170068 containerd[1590]: time="2025-02-13T20:16:45.169937603Z" level=info msg="StartContainer for \"a60cfc30dd48fa8c589fe2e986e4b306d172fc29704af709fcc47d99ad355311\"" Feb 13 20:16:45.177403 containerd[1590]: time="2025-02-13T20:16:45.177322832Z" level=info msg="CreateContainer within sandbox \"79d6c23af9ac603e8c3f1e411be47554e1824be766df1d48dff079da5cbcb020\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95315c9d49f4157b51d1fd994e74ddb25271ba6372389c06af8debcd56d4e668\"" Feb 13 20:16:45.180778 containerd[1590]: time="2025-02-13T20:16:45.169935530Z" level=info msg="StartContainer for \"d4323de46d1d9034f299bc6ecb6618ed453c8fe25342a24270759529e4afa486\"" Feb 13 20:16:45.181176 containerd[1590]: time="2025-02-13T20:16:45.181123957Z" level=info msg="StartContainer for \"95315c9d49f4157b51d1fd994e74ddb25271ba6372389c06af8debcd56d4e668\"" Feb 13 20:16:45.270440 kubelet[2393]: E0213 20:16:45.270096 2393 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.189.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.189.10:6443: connect: connection refused Feb 13 20:16:45.466215 containerd[1590]: time="2025-02-13T20:16:45.466043460Z" level=info msg="StartContainer for \"a60cfc30dd48fa8c589fe2e986e4b306d172fc29704af709fcc47d99ad355311\" returns successfully" Feb 13 20:16:45.516494 containerd[1590]: time="2025-02-13T20:16:45.516343750Z" level=info msg="StartContainer for \"d4323de46d1d9034f299bc6ecb6618ed453c8fe25342a24270759529e4afa486\" returns successfully" Feb 13 20:16:45.582636 containerd[1590]: time="2025-02-13T20:16:45.582201611Z" level=info msg="StartContainer for \"95315c9d49f4157b51d1fd994e74ddb25271ba6372389c06af8debcd56d4e668\" returns successfully" Feb 13 20:16:46.428979 kubelet[2393]: I0213 20:16:46.427405 2393 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:46.469886 kubelet[2393]: E0213 20:16:46.468976 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:46.488094 kubelet[2393]: E0213 20:16:46.488029 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:46.491740 kubelet[2393]: E0213 20:16:46.491685 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:47.501462 kubelet[2393]: E0213 20:16:47.501369 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:47.505019 kubelet[2393]: E0213 20:16:47.504952 2393 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:47.510205 kubelet[2393]: E0213 20:16:47.510047 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:48.502112 kubelet[2393]: E0213 20:16:48.502046 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:48.505889 kubelet[2393]: E0213 20:16:48.504792 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:49.709427 update_engine[1565]: I20250213 20:16:49.692946 1565 update_attempter.cc:509] Updating boot flags... Feb 13 20:16:49.835884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2669) Feb 13 20:16:49.971935 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2669) Feb 13 20:16:50.121126 kubelet[2393]: E0213 20:16:50.118483 2393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-d-1eeb8951e4\" not found" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:50.215199 kubelet[2393]: E0213 20:16:50.210631 2393 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.1-d-1eeb8951e4.1823ddd88f73b70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-d-1eeb8951e4,UID:ci-4081.3.1-d-1eeb8951e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-d-1eeb8951e4,},FirstTimestamp:2025-02-13 20:16:43.284543242 +0000 UTC m=+0.829952122,LastTimestamp:2025-02-13 20:16:43.284543242 +0000 UTC m=+0.829952122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-d-1eeb8951e4,}" Feb 13 20:16:50.309027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2669) Feb 13 20:16:50.310099 kubelet[2393]: I0213 20:16:50.309949 2393 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:50.323582 kubelet[2393]: I0213 20:16:50.321941 2393 apiserver.go:52] "Watching apiserver" Feb 13 20:16:50.367921 kubelet[2393]: E0213 20:16:50.355281 2393 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.1-d-1eeb8951e4.1823ddd8913e8325 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-d-1eeb8951e4,UID:ci-4081.3.1-d-1eeb8951e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-d-1eeb8951e4,},FirstTimestamp:2025-02-13 20:16:43.314610981 +0000 UTC m=+0.860019869,LastTimestamp:2025-02-13 20:16:43.314610981 +0000 UTC m=+0.860019869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-d-1eeb8951e4,}" Feb 13 20:16:50.400280 kubelet[2393]: I0213 20:16:50.399635 2393 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:16:50.435355 kubelet[2393]: E0213 20:16:50.434970 2393 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.1-d-1eeb8951e4.1823ddd8941baffe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-d-1eeb8951e4,UID:ci-4081.3.1-d-1eeb8951e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.1-d-1eeb8951e4 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-d-1eeb8951e4,},FirstTimestamp:2025-02-13 20:16:43.36266035 +0000 UTC m=+0.908069221,LastTimestamp:2025-02-13 20:16:43.36266035 +0000 UTC m=+0.908069221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-d-1eeb8951e4,}" Feb 13 20:16:52.710734 kubelet[2393]: W0213 20:16:52.707589 2393 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:16:52.710734 kubelet[2393]: E0213 20:16:52.708277 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:53.414090 kubelet[2393]: W0213 20:16:53.414032 2393 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:16:53.420898 kubelet[2393]: E0213 20:16:53.414565 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:53.562966 systemd[1]: Reloading requested from client PID 2680 ('systemctl') (unit session-7.scope)... Feb 13 20:16:53.566565 systemd[1]: Reloading... Feb 13 20:16:53.580038 kubelet[2393]: E0213 20:16:53.577562 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:53.580038 kubelet[2393]: E0213 20:16:53.577690 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:53.955879 zram_generator::config[2722]: No configuration found. Feb 13 20:16:54.343966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:54.627124 systemd[1]: Reloading finished in 1059 ms. Feb 13 20:16:54.709396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:54.728207 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:16:54.728700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:54.747044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
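[Editor's note] The dns.go:153 errors repeated through this boot come from the node's resolv.conf listing more nameservers than the three that glibc's resolver honors, so kubelet applies only the first three entries and warns. A small sketch of that truncation; the four-entry input is hypothetical, chosen so the applied list matches the log:

    package main

    import "fmt"

    // applyNameserverLimit mirrors the constraint behind the dns.go:153
    // warnings: only the first three nameserver entries are applied,
    // duplicates included, and the rest are dropped.
    func applyNameserverLimit(servers []string) []string {
        const maxDNSNameservers = 3 // glibc resolv.conf limit
        if len(servers) > maxDNSNameservers {
            return servers[:maxDNSNameservers]
        }
        return servers
    }

    func main() {
        // Hypothetical resolv.conf contents; the first three match the
        // "applied nameserver line" kubelet logs above.
        raw := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.2"}
        fmt.Println(applyNameserverLimit(raw)) // [67.207.67.3 67.207.67.2 67.207.67.3]
    }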
Feb 13 20:16:55.096155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:55.114874 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:16:55.392033 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:55.392033 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:16:55.392033 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:55.395379 kubelet[2780]: I0213 20:16:55.395229 2780 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:16:55.421923 kubelet[2780]: I0213 20:16:55.420495 2780 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:16:55.421923 kubelet[2780]: I0213 20:16:55.420543 2780 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:16:55.421923 kubelet[2780]: I0213 20:16:55.421004 2780 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:16:55.426328 kubelet[2780]: I0213 20:16:55.425994 2780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:16:55.447079 kubelet[2780]: I0213 20:16:55.447025 2780 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:16:55.479636 kubelet[2780]: I0213 20:16:55.479585 2780 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:16:55.481211 kubelet[2780]: I0213 20:16:55.481142 2780 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:16:55.482066 kubelet[2780]: I0213 20:16:55.481386 2780 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-d-1eeb8951e4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:16:55.482066 kubelet[2780]: I0213 20:16:55.481752 2780 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:16:55.482066 kubelet[2780]: I0213 20:16:55.481777 2780 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:16:55.482066 kubelet[2780]: I0213 20:16:55.481879 2780 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:55.482859 kubelet[2780]: I0213 20:16:55.482562 2780 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:16:55.484301 kubelet[2780]: I0213 20:16:55.484227 2780 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:16:55.484632 kubelet[2780]: I0213 20:16:55.484574 2780 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:16:55.485257 kubelet[2780]: I0213 20:16:55.485033 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:16:55.503107 kubelet[2780]: I0213 20:16:55.502761 2780 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:16:55.523959 kubelet[2780]: I0213 20:16:55.521381 2780 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:16:55.527947 kubelet[2780]: I0213 20:16:55.526796 2780 server.go:1264] "Started kubelet" Feb 13 20:16:55.532880 kubelet[2780]: I0213 20:16:55.532725 2780 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:16:55.538096 kubelet[2780]: I0213 20:16:55.535251 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:16:55.538771 kubelet[2780]: I0213 
20:16:55.538637 2780 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:16:55.540879 kubelet[2780]: I0213 20:16:55.540696 2780 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:16:55.547902 kubelet[2780]: I0213 20:16:55.547867 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:16:55.556433 kubelet[2780]: I0213 20:16:55.556361 2780 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:16:55.559810 kubelet[2780]: I0213 20:16:55.559765 2780 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:16:55.574657 kubelet[2780]: I0213 20:16:55.570243 2780 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:16:55.582229 kubelet[2780]: I0213 20:16:55.582185 2780 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:16:55.583854 kubelet[2780]: I0213 20:16:55.582970 2780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:16:55.595791 kubelet[2780]: I0213 20:16:55.595726 2780 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:16:55.603925 kubelet[2780]: E0213 20:16:55.598625 2780 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:16:55.684179 kubelet[2780]: I0213 20:16:55.683467 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:16:55.697996 kubelet[2780]: I0213 20:16:55.697760 2780 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:16:55.698863 kubelet[2780]: I0213 20:16:55.698463 2780 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:16:55.699514 kubelet[2780]: I0213 20:16:55.699197 2780 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:16:55.703111 kubelet[2780]: E0213 20:16:55.702987 2780 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:16:55.713885 kubelet[2780]: I0213 20:16:55.712134 2780 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:55.799873 kubelet[2780]: I0213 20:16:55.798327 2780 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:55.799873 kubelet[2780]: I0213 20:16:55.798476 2780 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:55.805858 kubelet[2780]: E0213 20:16:55.805721 2780 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:16:55.915460 kubelet[2780]: I0213 20:16:55.915088 2780 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:16:55.915460 kubelet[2780]: I0213 20:16:55.915132 2780 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:16:55.915460 kubelet[2780]: I0213 20:16:55.915169 2780 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:55.916349 kubelet[2780]: I0213 20:16:55.916240 2780 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:16:55.916349 kubelet[2780]: I0213 20:16:55.916274 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:16:55.916349 kubelet[2780]: I0213 20:16:55.916303 2780 policy_none.go:49] "None policy: Start" Feb 13 20:16:55.918613 kubelet[2780]: I0213 20:16:55.918354 2780 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:16:55.918613 kubelet[2780]: I0213 20:16:55.918461 2780 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:16:55.919311 kubelet[2780]: I0213 20:16:55.919193 2780 state_mem.go:75] "Updated machine memory state" Feb 13 20:16:55.922910 kubelet[2780]: I0213 20:16:55.922508 2780 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:16:55.923229 kubelet[2780]: I0213 20:16:55.923145 2780 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:16:55.928025 kubelet[2780]: I0213 20:16:55.926921 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:16:56.006974 kubelet[2780]: I0213 20:16:56.006360 2780 topology_manager.go:215] "Topology Admit Handler" podUID="b84f2ce68df271cb55d11d2ce0117130" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.017022 kubelet[2780]: I0213 20:16:56.008360 2780 topology_manager.go:215] "Topology Admit Handler" podUID="061857c1455f8987c5be9fc90fa140a4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.017022 kubelet[2780]: I0213 20:16:56.008590 2780 topology_manager.go:215] "Topology Admit Handler" podUID="a5db0bf093c980ede85b89344662693d" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.081894 kubelet[2780]: I0213 20:16:56.080622 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.081894 kubelet[2780]: I0213 20:16:56.080717 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.081894 kubelet[2780]: I0213 20:16:56.080782 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.081894 kubelet[2780]: I0213 20:16:56.080926 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5db0bf093c980ede85b89344662693d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-d-1eeb8951e4\" (UID: \"a5db0bf093c980ede85b89344662693d\") " pod="kube-system/kube-scheduler-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.081894 kubelet[2780]: I0213 20:16:56.080969 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b84f2ce68df271cb55d11d2ce0117130-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" (UID: \"b84f2ce68df271cb55d11d2ce0117130\") " pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.082324 kubelet[2780]: I0213 20:16:56.081001 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b84f2ce68df271cb55d11d2ce0117130-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" (UID: \"b84f2ce68df271cb55d11d2ce0117130\") " pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.082324 kubelet[2780]: I0213 20:16:56.081032 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.082324 kubelet[2780]: I0213 20:16:56.081065 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/061857c1455f8987c5be9fc90fa140a4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-d-1eeb8951e4\" (UID: \"061857c1455f8987c5be9fc90fa140a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.082324 kubelet[2780]: I0213 20:16:56.081100 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b84f2ce68df271cb55d11d2ce0117130-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" (UID: \"b84f2ce68df271cb55d11d2ce0117130\") " pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.096485 kubelet[2780]: W0213 20:16:56.096336 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:16:56.098241 kubelet[2780]: W0213 20:16:56.098210 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:16:56.098501 kubelet[2780]: E0213 20:16:56.098399 2780 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-d-1eeb8951e4\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.101339 kubelet[2780]: W0213 20:16:56.101299 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:16:56.102852 kubelet[2780]: E0213 20:16:56.102116 2780 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.1-d-1eeb8951e4\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.1-d-1eeb8951e4" Feb 13 20:16:56.401046 kubelet[2780]: E0213 20:16:56.400796 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:56.405817 kubelet[2780]: E0213 20:16:56.403399 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:56.408081 kubelet[2780]: E0213 20:16:56.407436 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:56.509254 kubelet[2780]: I0213 20:16:56.509163 2780 apiserver.go:52] "Watching apiserver" Feb 13 20:16:56.576750 kubelet[2780]: I0213 20:16:56.576524 2780 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:16:56.801458 kubelet[2780]: E0213 20:16:56.800729 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:56.804256 kubelet[2780]: E0213 20:16:56.802485 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:56.805073 kubelet[2780]: E0213 20:16:56.804479 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:56.958868 kubelet[2780]: I0213 20:16:56.957846 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-d-1eeb8951e4" podStartSLOduration=3.957776252 podStartE2EDuration="3.957776252s" podCreationTimestamp="2025-02-13 20:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-02-13 20:16:56.95286083 +0000 UTC m=+1.811051443" watchObservedRunningTime="2025-02-13 20:16:56.957776252 +0000 UTC m=+1.815966841" Feb 13 20:16:57.804922 kubelet[2780]: E0213 20:16:57.802683 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:58.428814 kubelet[2780]: E0213 20:16:58.427983 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:58.481684 kubelet[2780]: I0213 20:16:58.481408 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-d-1eeb8951e4" podStartSLOduration=2.48137679 podStartE2EDuration="2.48137679s" podCreationTimestamp="2025-02-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:57.05060722 +0000 UTC m=+1.908797825" watchObservedRunningTime="2025-02-13 20:16:58.48137679 +0000 UTC m=+3.339567403" Feb 13 20:16:58.810548 kubelet[2780]: E0213 20:16:58.807815 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:01.163095 kubelet[2780]: E0213 20:17:01.163034 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:01.840029 kubelet[2780]: E0213 20:17:01.839220 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:02.105017 kubelet[2780]: E0213 20:17:02.104046 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:02.851894 kubelet[2780]: E0213 20:17:02.849632 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:02.851894 kubelet[2780]: E0213 20:17:02.849913 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:03.248231 sudo[1800]: pam_unix(sudo:session): session closed for user root Feb 13 20:17:03.255787 sshd[1793]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:03.266220 systemd[1]: sshd@6-137.184.189.10:22-147.75.109.163:59270.service: Deactivated successfully. Feb 13 20:17:03.276573 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:17:03.277465 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:17:03.282039 systemd-logind[1558]: Removed session 7. 
Feb 13 20:17:07.271972 kubelet[2780]: I0213 20:17:07.271513 2780 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:17:07.275027 containerd[1590]: time="2025-02-13T20:17:07.273650115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:17:07.277387 kubelet[2780]: I0213 20:17:07.274145 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:17:07.761624 kubelet[2780]: I0213 20:17:07.761492 2780 topology_manager.go:215] "Topology Admit Handler" podUID="e0b7717a-4974-4df2-b254-0a7a8b80e8af" podNamespace="kube-system" podName="kube-proxy-6xrqz" Feb 13 20:17:07.891874 kubelet[2780]: I0213 20:17:07.889667 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0b7717a-4974-4df2-b254-0a7a8b80e8af-kube-proxy\") pod \"kube-proxy-6xrqz\" (UID: \"e0b7717a-4974-4df2-b254-0a7a8b80e8af\") " pod="kube-system/kube-proxy-6xrqz" Feb 13 20:17:07.891874 kubelet[2780]: I0213 20:17:07.889740 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0b7717a-4974-4df2-b254-0a7a8b80e8af-xtables-lock\") pod \"kube-proxy-6xrqz\" (UID: \"e0b7717a-4974-4df2-b254-0a7a8b80e8af\") " pod="kube-system/kube-proxy-6xrqz" Feb 13 20:17:07.891874 kubelet[2780]: I0213 20:17:07.889781 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5hs\" (UniqueName: \"kubernetes.io/projected/e0b7717a-4974-4df2-b254-0a7a8b80e8af-kube-api-access-2j5hs\") pod \"kube-proxy-6xrqz\" (UID: \"e0b7717a-4974-4df2-b254-0a7a8b80e8af\") " pod="kube-system/kube-proxy-6xrqz" Feb 13 20:17:07.891874 kubelet[2780]: I0213 20:17:07.889815 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0b7717a-4974-4df2-b254-0a7a8b80e8af-lib-modules\") pod \"kube-proxy-6xrqz\" (UID: \"e0b7717a-4974-4df2-b254-0a7a8b80e8af\") " pod="kube-system/kube-proxy-6xrqz" Feb 13 20:17:07.957872 kubelet[2780]: I0213 20:17:07.956169 2780 topology_manager.go:215] "Topology Admit Handler" podUID="e489e4fc-7387-4a76-9ab0-d70aadb82072" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-rc7vr" Feb 13 20:17:08.077748 kubelet[2780]: E0213 20:17:08.077128 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:08.081582 containerd[1590]: time="2025-02-13T20:17:08.081512866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xrqz,Uid:e0b7717a-4974-4df2-b254-0a7a8b80e8af,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:08.091861 kubelet[2780]: I0213 20:17:08.091649 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e489e4fc-7387-4a76-9ab0-d70aadb82072-var-lib-calico\") pod \"tigera-operator-7bc55997bb-rc7vr\" (UID: \"e489e4fc-7387-4a76-9ab0-d70aadb82072\") " pod="tigera-operator/tigera-operator-7bc55997bb-rc7vr" Feb 13 20:17:08.093886 kubelet[2780]: I0213 20:17:08.093728 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-68dq4\" (UniqueName: \"kubernetes.io/projected/e489e4fc-7387-4a76-9ab0-d70aadb82072-kube-api-access-68dq4\") pod \"tigera-operator-7bc55997bb-rc7vr\" (UID: \"e489e4fc-7387-4a76-9ab0-d70aadb82072\") " pod="tigera-operator/tigera-operator-7bc55997bb-rc7vr" Feb 13 20:17:08.163192 containerd[1590]: time="2025-02-13T20:17:08.160864423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:08.163192 containerd[1590]: time="2025-02-13T20:17:08.163122997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:08.163898 containerd[1590]: time="2025-02-13T20:17:08.163505299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:08.164089 containerd[1590]: time="2025-02-13T20:17:08.163864306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:08.199372 systemd[1]: run-containerd-runc-k8s.io-6914aa8bf67700f84f206be7d273738c86ab3932b06a301b55449422c6ecaca7-runc.rHhpGu.mount: Deactivated successfully. Feb 13 20:17:08.266115 containerd[1590]: time="2025-02-13T20:17:08.265588475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-rc7vr,Uid:e489e4fc-7387-4a76-9ab0-d70aadb82072,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:17:08.274413 containerd[1590]: time="2025-02-13T20:17:08.273999355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xrqz,Uid:e0b7717a-4974-4df2-b254-0a7a8b80e8af,Namespace:kube-system,Attempt:0,} returns sandbox id \"6914aa8bf67700f84f206be7d273738c86ab3932b06a301b55449422c6ecaca7\"" Feb 13 20:17:08.277377 kubelet[2780]: E0213 20:17:08.277051 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:08.295423 containerd[1590]: time="2025-02-13T20:17:08.294537049Z" level=info msg="CreateContainer within sandbox \"6914aa8bf67700f84f206be7d273738c86ab3932b06a301b55449422c6ecaca7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:17:08.357188 containerd[1590]: time="2025-02-13T20:17:08.356679040Z" level=info msg="CreateContainer within sandbox \"6914aa8bf67700f84f206be7d273738c86ab3932b06a301b55449422c6ecaca7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82f0525eb29763aec477de6fff938d0d1588adde618c802e06b90caab4f1fe3b\"" Feb 13 20:17:08.361149 containerd[1590]: time="2025-02-13T20:17:08.358891465Z" level=info msg="StartContainer for \"82f0525eb29763aec477de6fff938d0d1588adde618c802e06b90caab4f1fe3b\"" Feb 13 20:17:08.387995 containerd[1590]: time="2025-02-13T20:17:08.382197860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:08.387995 containerd[1590]: time="2025-02-13T20:17:08.382314411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:08.387995 containerd[1590]: time="2025-02-13T20:17:08.382352405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:08.387995 containerd[1590]: time="2025-02-13T20:17:08.382540175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:08.570043 containerd[1590]: time="2025-02-13T20:17:08.561537191Z" level=info msg="StartContainer for \"82f0525eb29763aec477de6fff938d0d1588adde618c802e06b90caab4f1fe3b\" returns successfully" Feb 13 20:17:08.615199 containerd[1590]: time="2025-02-13T20:17:08.614292863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-rc7vr,Uid:e489e4fc-7387-4a76-9ab0-d70aadb82072,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"65de6e455fe823b1edf9eaee5c6a702bd62677c96cdcde51d7cba916efc4941f\"" Feb 13 20:17:08.642256 containerd[1590]: time="2025-02-13T20:17:08.642180210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:17:08.895033 kubelet[2780]: E0213 20:17:08.894225 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:11.504148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84714720.mount: Deactivated successfully. Feb 13 20:17:12.819200 containerd[1590]: time="2025-02-13T20:17:12.815508455Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:12.826162 containerd[1590]: time="2025-02-13T20:17:12.824871944Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:17:12.830080 containerd[1590]: time="2025-02-13T20:17:12.829993200Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:12.834803 containerd[1590]: time="2025-02-13T20:17:12.834726569Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:12.835452 containerd[1590]: time="2025-02-13T20:17:12.835409652Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.193154248s" Feb 13 20:17:12.835568 containerd[1590]: time="2025-02-13T20:17:12.835456477Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:17:12.852634 containerd[1590]: time="2025-02-13T20:17:12.852566191Z" level=info msg="CreateContainer within sandbox \"65de6e455fe823b1edf9eaee5c6a702bd62677c96cdcde51d7cba916efc4941f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:17:12.911956 containerd[1590]: time="2025-02-13T20:17:12.911058315Z" level=info msg="CreateContainer within sandbox \"65de6e455fe823b1edf9eaee5c6a702bd62677c96cdcde51d7cba916efc4941f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03d399cf13981c8a3f1b94bde1c017050e9fd2835e59391c728a5e10102e8226\"" Feb 13 20:17:12.915476 
containerd[1590]: time="2025-02-13T20:17:12.915203953Z" level=info msg="StartContainer for \"03d399cf13981c8a3f1b94bde1c017050e9fd2835e59391c728a5e10102e8226\"" Feb 13 20:17:13.034398 systemd[1]: run-containerd-runc-k8s.io-03d399cf13981c8a3f1b94bde1c017050e9fd2835e59391c728a5e10102e8226-runc.9j7TIL.mount: Deactivated successfully. Feb 13 20:17:13.195425 containerd[1590]: time="2025-02-13T20:17:13.195236947Z" level=info msg="StartContainer for \"03d399cf13981c8a3f1b94bde1c017050e9fd2835e59391c728a5e10102e8226\" returns successfully" Feb 13 20:17:13.975589 kubelet[2780]: I0213 20:17:13.975221 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xrqz" podStartSLOduration=6.9751860390000004 podStartE2EDuration="6.975186039s" podCreationTimestamp="2025-02-13 20:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:08.940720131 +0000 UTC m=+13.798910761" watchObservedRunningTime="2025-02-13 20:17:13.975186039 +0000 UTC m=+18.833376658" Feb 13 20:17:15.735691 kubelet[2780]: I0213 20:17:15.735114 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-rc7vr" podStartSLOduration=4.5119998599999995 podStartE2EDuration="8.735081913s" podCreationTimestamp="2025-02-13 20:17:07 +0000 UTC" firstStartedPulling="2025-02-13 20:17:08.62306984 +0000 UTC m=+13.481260422" lastFinishedPulling="2025-02-13 20:17:12.846151892 +0000 UTC m=+17.704342475" observedRunningTime="2025-02-13 20:17:13.981990181 +0000 UTC m=+18.840180792" watchObservedRunningTime="2025-02-13 20:17:15.735081913 +0000 UTC m=+20.593272621" Feb 13 20:17:17.288927 kubelet[2780]: I0213 20:17:17.286963 2780 topology_manager.go:215] "Topology Admit Handler" podUID="9232ee99-2e8a-405b-aba0-b5ba6f8726a7" podNamespace="calico-system" podName="calico-typha-86664b45bc-drgq8" Feb 13 20:17:17.358980 kubelet[2780]: I0213 20:17:17.358303 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdpvb\" (UniqueName: \"kubernetes.io/projected/9232ee99-2e8a-405b-aba0-b5ba6f8726a7-kube-api-access-kdpvb\") pod \"calico-typha-86664b45bc-drgq8\" (UID: \"9232ee99-2e8a-405b-aba0-b5ba6f8726a7\") " pod="calico-system/calico-typha-86664b45bc-drgq8" Feb 13 20:17:17.358980 kubelet[2780]: I0213 20:17:17.358392 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9232ee99-2e8a-405b-aba0-b5ba6f8726a7-tigera-ca-bundle\") pod \"calico-typha-86664b45bc-drgq8\" (UID: \"9232ee99-2e8a-405b-aba0-b5ba6f8726a7\") " pod="calico-system/calico-typha-86664b45bc-drgq8" Feb 13 20:17:17.358980 kubelet[2780]: I0213 20:17:17.358429 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9232ee99-2e8a-405b-aba0-b5ba6f8726a7-typha-certs\") pod \"calico-typha-86664b45bc-drgq8\" (UID: \"9232ee99-2e8a-405b-aba0-b5ba6f8726a7\") " pod="calico-system/calico-typha-86664b45bc-drgq8" Feb 13 20:17:17.497874 kubelet[2780]: I0213 20:17:17.497220 2780 topology_manager.go:215] "Topology Admit Handler" podUID="8b8fb456-b7cb-4bf2-8666-7b3421dd4327" podNamespace="calico-system" podName="calico-node-m4gf5" Feb 13 20:17:17.560539 kubelet[2780]: I0213 20:17:17.560351 2780 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-cni-bin-dir\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562122 kubelet[2780]: I0213 20:17:17.561460 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2459\" (UniqueName: \"kubernetes.io/projected/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-kube-api-access-j2459\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562122 kubelet[2780]: I0213 20:17:17.561596 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-node-certs\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562122 kubelet[2780]: I0213 20:17:17.561620 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-var-run-calico\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562122 kubelet[2780]: I0213 20:17:17.561641 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-cni-net-dir\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562122 kubelet[2780]: I0213 20:17:17.561691 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-flexvol-driver-host\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562465 kubelet[2780]: I0213 20:17:17.561712 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-var-lib-calico\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562465 kubelet[2780]: I0213 20:17:17.561737 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-xtables-lock\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562465 kubelet[2780]: I0213 20:17:17.561758 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-lib-modules\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562465 kubelet[2780]: I0213 20:17:17.561775 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-tigera-ca-bundle\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562465 kubelet[2780]: I0213 20:17:17.561791 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-cni-log-dir\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.562708 kubelet[2780]: I0213 20:17:17.561817 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8b8fb456-b7cb-4bf2-8666-7b3421dd4327-policysync\") pod \"calico-node-m4gf5\" (UID: \"8b8fb456-b7cb-4bf2-8666-7b3421dd4327\") " pod="calico-system/calico-node-m4gf5" Feb 13 20:17:17.619239 kubelet[2780]: E0213 20:17:17.619164 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:17.620693 containerd[1590]: time="2025-02-13T20:17:17.619913646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86664b45bc-drgq8,Uid:9232ee99-2e8a-405b-aba0-b5ba6f8726a7,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:17.782885 kubelet[2780]: E0213 20:17:17.780297 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.782885 kubelet[2780]: W0213 20:17:17.780354 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.782885 kubelet[2780]: E0213 20:17:17.780447 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.807933 containerd[1590]: time="2025-02-13T20:17:17.807732191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:17.807933 containerd[1590]: time="2025-02-13T20:17:17.807948298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:17.808357 containerd[1590]: time="2025-02-13T20:17:17.807988622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:17.808427 containerd[1590]: time="2025-02-13T20:17:17.808366772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:17.822238 kubelet[2780]: E0213 20:17:17.821767 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:17.827856 containerd[1590]: time="2025-02-13T20:17:17.826989565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4gf5,Uid:8b8fb456-b7cb-4bf2-8666-7b3421dd4327,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:17.854593 kubelet[2780]: I0213 20:17:17.854512 2780 topology_manager.go:215] "Topology Admit Handler" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" podNamespace="calico-system" podName="csi-node-driver-7s86p" Feb 13 20:17:17.889797 kubelet[2780]: E0213 20:17:17.888741 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:17.892720 kubelet[2780]: E0213 20:17:17.891112 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.892720 kubelet[2780]: W0213 20:17:17.891200 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.892720 kubelet[2780]: E0213 20:17:17.891263 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.892720 kubelet[2780]: E0213 20:17:17.892059 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.892720 kubelet[2780]: W0213 20:17:17.892084 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.892720 kubelet[2780]: E0213 20:17:17.892109 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.897347 kubelet[2780]: E0213 20:17:17.896320 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.897347 kubelet[2780]: W0213 20:17:17.896365 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.897347 kubelet[2780]: E0213 20:17:17.896422 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.897347 kubelet[2780]: E0213 20:17:17.897096 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.897347 kubelet[2780]: W0213 20:17:17.897116 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.897347 kubelet[2780]: E0213 20:17:17.897141 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.897738 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.900675 kubelet[2780]: W0213 20:17:17.897778 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.897799 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.898176 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.900675 kubelet[2780]: W0213 20:17:17.898190 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.898207 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.898640 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.900675 kubelet[2780]: W0213 20:17:17.898654 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.898671 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.900675 kubelet[2780]: E0213 20:17:17.899382 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.901259 kubelet[2780]: W0213 20:17:17.899397 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.901259 kubelet[2780]: E0213 20:17:17.899413 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.901259 kubelet[2780]: E0213 20:17:17.899664 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.901259 kubelet[2780]: W0213 20:17:17.899675 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.901259 kubelet[2780]: E0213 20:17:17.899688 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.901259 kubelet[2780]: E0213 20:17:17.899905 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.901259 kubelet[2780]: W0213 20:17:17.899921 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.901259 kubelet[2780]: E0213 20:17:17.899934 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.901259 kubelet[2780]: E0213 20:17:17.900157 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.901259 kubelet[2780]: W0213 20:17:17.900168 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.900180 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.900439 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.910067 kubelet[2780]: W0213 20:17:17.900453 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.900467 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.900722 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.910067 kubelet[2780]: W0213 20:17:17.900738 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.900756 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.901050 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.910067 kubelet[2780]: W0213 20:17:17.901063 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.910067 kubelet[2780]: E0213 20:17:17.901077 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.901605 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.917909 kubelet[2780]: W0213 20:17:17.901619 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.901634 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.901886 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.917909 kubelet[2780]: W0213 20:17:17.901903 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.901919 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.902182 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.917909 kubelet[2780]: W0213 20:17:17.902196 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.902210 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.917909 kubelet[2780]: E0213 20:17:17.902609 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.920627 kubelet[2780]: W0213 20:17:17.902625 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.920627 kubelet[2780]: E0213 20:17:17.902640 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.920627 kubelet[2780]: E0213 20:17:17.902924 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.920627 kubelet[2780]: W0213 20:17:17.902935 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.920627 kubelet[2780]: E0213 20:17:17.902949 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.920627 kubelet[2780]: E0213 20:17:17.903199 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.920627 kubelet[2780]: W0213 20:17:17.903211 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.920627 kubelet[2780]: E0213 20:17:17.903224 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.987607 kubelet[2780]: E0213 20:17:17.987493 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.989196 kubelet[2780]: W0213 20:17:17.987652 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.989196 kubelet[2780]: E0213 20:17:17.988933 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.989196 kubelet[2780]: I0213 20:17:17.989060 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23f309b9-4a91-4126-8a72-5d65e6b18bef-socket-dir\") pod \"csi-node-driver-7s86p\" (UID: \"23f309b9-4a91-4126-8a72-5d65e6b18bef\") " pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:17.990073 kubelet[2780]: E0213 20:17:17.989742 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.990073 kubelet[2780]: W0213 20:17:17.989770 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.990073 kubelet[2780]: E0213 20:17:17.989816 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.990962 kubelet[2780]: E0213 20:17:17.990188 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.990962 kubelet[2780]: W0213 20:17:17.990203 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.990962 kubelet[2780]: E0213 20:17:17.990224 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.990962 kubelet[2780]: I0213 20:17:17.990678 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23f309b9-4a91-4126-8a72-5d65e6b18bef-registration-dir\") pod \"csi-node-driver-7s86p\" (UID: \"23f309b9-4a91-4126-8a72-5d65e6b18bef\") " pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:17.990962 kubelet[2780]: E0213 20:17:17.990813 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.990962 kubelet[2780]: W0213 20:17:17.990876 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.990962 kubelet[2780]: E0213 20:17:17.990892 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.992803 kubelet[2780]: E0213 20:17:17.992762 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.992803 kubelet[2780]: W0213 20:17:17.992794 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.993182 kubelet[2780]: E0213 20:17:17.992820 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.993974 kubelet[2780]: E0213 20:17:17.993942 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.993974 kubelet[2780]: W0213 20:17:17.993974 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.994550 kubelet[2780]: E0213 20:17:17.994495 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.994919 kubelet[2780]: E0213 20:17:17.994812 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.994919 kubelet[2780]: W0213 20:17:17.994851 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.994919 kubelet[2780]: E0213 20:17:17.994874 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.997255 kubelet[2780]: I0213 20:17:17.997118 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qns\" (UniqueName: \"kubernetes.io/projected/23f309b9-4a91-4126-8a72-5d65e6b18bef-kube-api-access-52qns\") pod \"csi-node-driver-7s86p\" (UID: \"23f309b9-4a91-4126-8a72-5d65e6b18bef\") " pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:17.997490 kubelet[2780]: E0213 20:17:17.997384 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.997490 kubelet[2780]: W0213 20:17:17.997402 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.997490 kubelet[2780]: E0213 20:17:17.997428 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.998172 kubelet[2780]: E0213 20:17:17.998119 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.998172 kubelet[2780]: W0213 20:17:17.998145 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.998172 kubelet[2780]: E0213 20:17:17.998169 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.998474 kubelet[2780]: E0213 20:17:17.998451 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.998474 kubelet[2780]: W0213 20:17:17.998471 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.998588 kubelet[2780]: E0213 20:17:17.998486 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:17.998588 kubelet[2780]: I0213 20:17:17.998528 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/23f309b9-4a91-4126-8a72-5d65e6b18bef-varrun\") pod \"csi-node-driver-7s86p\" (UID: \"23f309b9-4a91-4126-8a72-5d65e6b18bef\") " pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:17.999249 kubelet[2780]: E0213 20:17:17.999220 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.999249 kubelet[2780]: W0213 20:17:17.999243 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:17.999403 kubelet[2780]: E0213 20:17:17.999271 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:17.999403 kubelet[2780]: I0213 20:17:17.999299 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f309b9-4a91-4126-8a72-5d65e6b18bef-kubelet-dir\") pod \"csi-node-driver-7s86p\" (UID: \"23f309b9-4a91-4126-8a72-5d65e6b18bef\") " pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:17.999750 kubelet[2780]: E0213 20:17:17.999726 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:17.999750 kubelet[2780]: W0213 20:17:17.999746 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.000022 kubelet[2780]: E0213 20:17:17.999765 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.000101 kubelet[2780]: E0213 20:17:18.000080 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.000101 kubelet[2780]: W0213 20:17:18.000098 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.000199 kubelet[2780]: E0213 20:17:18.000115 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.000485 kubelet[2780]: E0213 20:17:18.000464 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.000485 kubelet[2780]: W0213 20:17:18.000482 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.000603 kubelet[2780]: E0213 20:17:18.000494 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:18.006075 containerd[1590]: time="2025-02-13T20:17:17.995572936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:18.006075 containerd[1590]: time="2025-02-13T20:17:17.995681980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:18.006075 containerd[1590]: time="2025-02-13T20:17:17.995719125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:18.006075 containerd[1590]: time="2025-02-13T20:17:17.995944364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:18.007138 kubelet[2780]: E0213 20:17:18.007073 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.007927 kubelet[2780]: W0213 20:17:18.007117 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.008278 kubelet[2780]: E0213 20:17:18.007944 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.138509 kubelet[2780]: E0213 20:17:18.138395 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.139466 kubelet[2780]: W0213 20:17:18.138512 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.139466 kubelet[2780]: E0213 20:17:18.138558 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.141206 kubelet[2780]: E0213 20:17:18.140936 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.141206 kubelet[2780]: W0213 20:17:18.141205 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.146711 kubelet[2780]: E0213 20:17:18.141356 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.146711 kubelet[2780]: E0213 20:17:18.146462 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.146711 kubelet[2780]: W0213 20:17:18.146491 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.146711 kubelet[2780]: E0213 20:17:18.146533 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:18.149017 containerd[1590]: time="2025-02-13T20:17:18.148689875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86664b45bc-drgq8,Uid:9232ee99-2e8a-405b-aba0-b5ba6f8726a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"dde0a69fb52dd843d3bcbc8d7f20c51ea887b64596a5f531252754e834405fc7\"" Feb 13 20:17:18.180405 kubelet[2780]: E0213 20:17:18.178350 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.180405 kubelet[2780]: W0213 20:17:18.179200 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.180405 kubelet[2780]: E0213 20:17:18.179283 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.180405 kubelet[2780]: E0213 20:17:18.179864 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:18.182996 kubelet[2780]: E0213 20:17:18.182357 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.182996 kubelet[2780]: W0213 20:17:18.182425 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.182996 kubelet[2780]: E0213 20:17:18.182459 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.183989 kubelet[2780]: E0213 20:17:18.183955 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.184083 kubelet[2780]: W0213 20:17:18.183989 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.184083 kubelet[2780]: E0213 20:17:18.184021 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.184429 kubelet[2780]: E0213 20:17:18.184407 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.184429 kubelet[2780]: W0213 20:17:18.184426 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.184958 kubelet[2780]: E0213 20:17:18.184439 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:18.187726 kubelet[2780]: E0213 20:17:18.187478 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.187726 kubelet[2780]: W0213 20:17:18.187531 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.187726 kubelet[2780]: E0213 20:17:18.187581 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.189174 kubelet[2780]: E0213 20:17:18.189013 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.189174 kubelet[2780]: W0213 20:17:18.189047 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.189174 kubelet[2780]: E0213 20:17:18.189116 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.191051 kubelet[2780]: E0213 20:17:18.190496 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.191051 kubelet[2780]: W0213 20:17:18.190514 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.191415 kubelet[2780]: E0213 20:17:18.191379 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.192363 kubelet[2780]: W0213 20:17:18.191397 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.193421 kubelet[2780]: E0213 20:17:18.193301 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.193421 kubelet[2780]: W0213 20:17:18.193328 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.196766 kubelet[2780]: E0213 20:17:18.194759 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.196766 kubelet[2780]: W0213 20:17:18.194782 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.198923 kubelet[2780]: E0213 20:17:18.198822 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.198923 kubelet[2780]: W0213 20:17:18.198883 2780 driver-call.go:149] FlexVolume: driver call failed: executable: 
Feb 13 20:17:18.201260 containerd[1590]: time="2025-02-13T20:17:18.201103938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4gf5,Uid:8b8fb456-b7cb-4bf2-8666-7b3421dd4327,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\""
Feb 13 20:17:18.207225 containerd[1590]: time="2025-02-13T20:17:18.206881812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
[... identical FlexVolume init failure messages repeat, 20:17:18.208063 through 20:17:18.208545 ...]
Feb 13 20:17:18.209908 kubelet[2780]: E0213 20:17:18.209207 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
[... identical FlexVolume init failure messages repeat, 20:17:18.210550 through 20:17:18.210643 ...]
Error: unexpected end of JSON input" Feb 13 20:17:18.211479 kubelet[2780]: E0213 20:17:18.211436 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.211479 kubelet[2780]: W0213 20:17:18.211467 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.212992 kubelet[2780]: E0213 20:17:18.211609 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.215452 kubelet[2780]: E0213 20:17:18.214237 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.215452 kubelet[2780]: W0213 20:17:18.214286 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.215452 kubelet[2780]: E0213 20:17:18.215211 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.215452 kubelet[2780]: E0213 20:17:18.215387 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.217937 kubelet[2780]: W0213 20:17:18.215253 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.218386 kubelet[2780]: E0213 20:17:18.218362 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.218683 kubelet[2780]: E0213 20:17:18.218667 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.218877 kubelet[2780]: W0213 20:17:18.218856 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.219049 kubelet[2780]: E0213 20:17:18.219033 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.219898 kubelet[2780]: E0213 20:17:18.219875 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.220661 kubelet[2780]: W0213 20:17:18.220227 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.220661 kubelet[2780]: E0213 20:17:18.220345 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:18.221553 kubelet[2780]: E0213 20:17:18.221438 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.221553 kubelet[2780]: W0213 20:17:18.221460 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.221815 kubelet[2780]: E0213 20:17:18.221706 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.222174 kubelet[2780]: E0213 20:17:18.222162 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.222442 kubelet[2780]: W0213 20:17:18.222230 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.222442 kubelet[2780]: E0213 20:17:18.222331 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.222630 kubelet[2780]: E0213 20:17:18.222618 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.222707 kubelet[2780]: W0213 20:17:18.222695 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.222914 kubelet[2780]: E0213 20:17:18.222883 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.223611 kubelet[2780]: E0213 20:17:18.223539 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.223611 kubelet[2780]: W0213 20:17:18.223561 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.223611 kubelet[2780]: E0213 20:17:18.223578 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:18.285719 kubelet[2780]: E0213 20:17:18.285673 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.286259 kubelet[2780]: W0213 20:17:18.286077 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.286259 kubelet[2780]: E0213 20:17:18.286145 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:18.331937 kubelet[2780]: E0213 20:17:18.331875 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:18.331937 kubelet[2780]: W0213 20:17:18.331916 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:18.331937 kubelet[2780]: E0213 20:17:18.331956 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.709897 kubelet[2780]: E0213 20:17:19.708909 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:20.041148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460226754.mount: Deactivated successfully. Feb 13 20:17:21.658146 systemd[1]: Started sshd@7-137.184.189.10:22-194.0.234.38:52774.service - OpenSSH per-connection server daemon (194.0.234.38:52774). Feb 13 20:17:21.709418 kubelet[2780]: E0213 20:17:21.708706 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:22.177722 containerd[1590]: time="2025-02-13T20:17:22.177643001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:22.180245 containerd[1590]: time="2025-02-13T20:17:22.180122290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:17:22.181917 containerd[1590]: time="2025-02-13T20:17:22.181586160Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:22.185034 containerd[1590]: time="2025-02-13T20:17:22.184967778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:22.188096 containerd[1590]: time="2025-02-13T20:17:22.186937298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.979999666s" Feb 13 20:17:22.188096 containerd[1590]: time="2025-02-13T20:17:22.186994353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:17:22.189628 containerd[1590]: time="2025-02-13T20:17:22.189473398Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:17:22.224979 containerd[1590]: time="2025-02-13T20:17:22.224806998Z" level=info msg="CreateContainer within sandbox \"dde0a69fb52dd843d3bcbc8d7f20c51ea887b64596a5f531252754e834405fc7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:17:22.348360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117855183.mount: Deactivated successfully. Feb 13 20:17:22.381132 containerd[1590]: time="2025-02-13T20:17:22.375411545Z" level=info msg="CreateContainer within sandbox \"dde0a69fb52dd843d3bcbc8d7f20c51ea887b64596a5f531252754e834405fc7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a75de734db8abcc93c3b4404ab8e9412c1fa39cc1bea2e05062057093405607a\"" Feb 13 20:17:22.381132 containerd[1590]: time="2025-02-13T20:17:22.378061512Z" level=info msg="StartContainer for \"a75de734db8abcc93c3b4404ab8e9412c1fa39cc1bea2e05062057093405607a\"" Feb 13 20:17:22.597084 containerd[1590]: time="2025-02-13T20:17:22.597019160Z" level=info msg="StartContainer for \"a75de734db8abcc93c3b4404ab8e9412c1fa39cc1bea2e05062057093405607a\" returns successfully" Feb 13 20:17:22.815008 sshd[3322]: Invalid user nutanix from 194.0.234.38 port 52774 Feb 13 20:17:23.018643 sshd[3322]: Connection closed by invalid user nutanix 194.0.234.38 port 52774 [preauth] Feb 13 20:17:23.033818 systemd[1]: sshd@7-137.184.189.10:22-194.0.234.38:52774.service: Deactivated successfully. Feb 13 20:17:23.065126 kubelet[2780]: E0213 20:17:23.062654 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:23.091164 kubelet[2780]: E0213 20:17:23.091107 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.091164 kubelet[2780]: W0213 20:17:23.091142 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.091164 kubelet[2780]: E0213 20:17:23.091168 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.095111 kubelet[2780]: E0213 20:17:23.095051 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.095111 kubelet[2780]: W0213 20:17:23.095096 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.095364 kubelet[2780]: E0213 20:17:23.095132 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.098921 kubelet[2780]: E0213 20:17:23.098866 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.098921 kubelet[2780]: W0213 20:17:23.098910 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.100549 kubelet[2780]: E0213 20:17:23.098945 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.102750 kubelet[2780]: E0213 20:17:23.102645 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.103326 kubelet[2780]: W0213 20:17:23.102685 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.103326 kubelet[2780]: E0213 20:17:23.103030 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.104880 kubelet[2780]: E0213 20:17:23.104745 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.104880 kubelet[2780]: W0213 20:17:23.104777 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.104880 kubelet[2780]: E0213 20:17:23.104813 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.106634 kubelet[2780]: E0213 20:17:23.105189 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.106634 kubelet[2780]: W0213 20:17:23.105201 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.106634 kubelet[2780]: E0213 20:17:23.105216 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.106634 kubelet[2780]: E0213 20:17:23.105956 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.106634 kubelet[2780]: W0213 20:17:23.105977 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.106634 kubelet[2780]: E0213 20:17:23.105998 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.106634 kubelet[2780]: E0213 20:17:23.106454 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.106634 kubelet[2780]: W0213 20:17:23.106468 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.106634 kubelet[2780]: E0213 20:17:23.106485 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.107020 kubelet[2780]: E0213 20:17:23.106737 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.107020 kubelet[2780]: W0213 20:17:23.106768 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.107020 kubelet[2780]: E0213 20:17:23.106782 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.107157 kubelet[2780]: E0213 20:17:23.107036 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.107157 kubelet[2780]: W0213 20:17:23.107051 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.107157 kubelet[2780]: E0213 20:17:23.107063 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.107290 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.111097 kubelet[2780]: W0213 20:17:23.107300 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.107311 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.107544 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.111097 kubelet[2780]: W0213 20:17:23.107553 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.107566 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.107926 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.111097 kubelet[2780]: W0213 20:17:23.107939 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.107983 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.111097 kubelet[2780]: E0213 20:17:23.108333 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.112405 kubelet[2780]: W0213 20:17:23.108350 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.112405 kubelet[2780]: E0213 20:17:23.108364 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.112405 kubelet[2780]: E0213 20:17:23.108633 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.112405 kubelet[2780]: W0213 20:17:23.108667 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.112405 kubelet[2780]: E0213 20:17:23.108682 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.126132 kubelet[2780]: I0213 20:17:23.126042 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86664b45bc-drgq8" podStartSLOduration=2.143224622 podStartE2EDuration="6.126013661s" podCreationTimestamp="2025-02-13 20:17:17 +0000 UTC" firstStartedPulling="2025-02-13 20:17:18.20544683 +0000 UTC m=+23.063637412" lastFinishedPulling="2025-02-13 20:17:22.188235853 +0000 UTC m=+27.046426451" observedRunningTime="2025-02-13 20:17:23.123465513 +0000 UTC m=+27.981656116" watchObservedRunningTime="2025-02-13 20:17:23.126013661 +0000 UTC m=+27.984204265" Feb 13 20:17:23.174918 kubelet[2780]: E0213 20:17:23.174865 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.174918 kubelet[2780]: W0213 20:17:23.174910 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.175482 kubelet[2780]: E0213 20:17:23.174946 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.178029 kubelet[2780]: E0213 20:17:23.177861 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.178029 kubelet[2780]: W0213 20:17:23.177942 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.180091 kubelet[2780]: E0213 20:17:23.180013 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.182843 kubelet[2780]: E0213 20:17:23.182765 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.183327 kubelet[2780]: W0213 20:17:23.183069 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.183327 kubelet[2780]: E0213 20:17:23.183128 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.183677 kubelet[2780]: E0213 20:17:23.183563 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.183677 kubelet[2780]: W0213 20:17:23.183578 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.183677 kubelet[2780]: E0213 20:17:23.183592 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.184644 kubelet[2780]: E0213 20:17:23.184546 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.184644 kubelet[2780]: W0213 20:17:23.184563 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.185541 kubelet[2780]: E0213 20:17:23.185403 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.185541 kubelet[2780]: E0213 20:17:23.185457 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.185541 kubelet[2780]: W0213 20:17:23.185475 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.185541 kubelet[2780]: E0213 20:17:23.185539 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.187026 kubelet[2780]: E0213 20:17:23.186912 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.187026 kubelet[2780]: W0213 20:17:23.186930 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.187435 kubelet[2780]: E0213 20:17:23.187157 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.187688 kubelet[2780]: E0213 20:17:23.187642 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.187930 kubelet[2780]: W0213 20:17:23.187758 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.188054 kubelet[2780]: E0213 20:17:23.188035 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.188985 kubelet[2780]: E0213 20:17:23.188955 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.188985 kubelet[2780]: W0213 20:17:23.188979 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.190360 kubelet[2780]: E0213 20:17:23.189724 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.191329 kubelet[2780]: E0213 20:17:23.191293 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.191433 kubelet[2780]: W0213 20:17:23.191329 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.191433 kubelet[2780]: E0213 20:17:23.191367 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.192100 kubelet[2780]: E0213 20:17:23.192075 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.192175 kubelet[2780]: W0213 20:17:23.192101 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.192435 kubelet[2780]: E0213 20:17:23.192316 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.192435 kubelet[2780]: E0213 20:17:23.192408 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.192435 kubelet[2780]: W0213 20:17:23.192420 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.192570 kubelet[2780]: E0213 20:17:23.192519 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.192867 kubelet[2780]: E0213 20:17:23.192809 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.192867 kubelet[2780]: W0213 20:17:23.192824 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.193068 kubelet[2780]: E0213 20:17:23.192973 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.193131 kubelet[2780]: E0213 20:17:23.193113 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.193180 kubelet[2780]: W0213 20:17:23.193129 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.193180 kubelet[2780]: E0213 20:17:23.193152 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.193774 kubelet[2780]: E0213 20:17:23.193577 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.193774 kubelet[2780]: W0213 20:17:23.193596 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.193774 kubelet[2780]: E0213 20:17:23.193615 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.194123 kubelet[2780]: E0213 20:17:23.194108 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.194123 kubelet[2780]: W0213 20:17:23.195873 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.194123 kubelet[2780]: E0213 20:17:23.195931 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:23.196790 kubelet[2780]: E0213 20:17:23.196773 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.196919 kubelet[2780]: W0213 20:17:23.196905 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.197082 kubelet[2780]: E0213 20:17:23.197045 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.197598 kubelet[2780]: E0213 20:17:23.197579 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:23.197746 kubelet[2780]: W0213 20:17:23.197706 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:23.197878 kubelet[2780]: E0213 20:17:23.197859 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:23.702187 kubelet[2780]: E0213 20:17:23.702106 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:24.066084 kubelet[2780]: I0213 20:17:24.065854 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:24.067055 kubelet[2780]: E0213 20:17:24.066823 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:24.118283 kubelet[2780]: E0213 20:17:24.118233 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.118283 kubelet[2780]: W0213 20:17:24.118269 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.118283 kubelet[2780]: E0213 20:17:24.118298 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.118520 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.119408 kubelet[2780]: W0213 20:17:24.118529 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.118539 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.118751 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.119408 kubelet[2780]: W0213 20:17:24.118764 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.118781 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.119058 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.119408 kubelet[2780]: W0213 20:17:24.119071 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.119085 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.119408 kubelet[2780]: E0213 20:17:24.119313 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.120944 kubelet[2780]: W0213 20:17:24.119324 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.120944 kubelet[2780]: E0213 20:17:24.119338 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.120944 kubelet[2780]: E0213 20:17:24.119573 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.120944 kubelet[2780]: W0213 20:17:24.119583 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.120944 kubelet[2780]: E0213 20:17:24.119598 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.120944 kubelet[2780]: E0213 20:17:24.119789 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.120944 kubelet[2780]: W0213 20:17:24.119799 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.120944 kubelet[2780]: E0213 20:17:24.119810 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:24.120944 kubelet[2780]: E0213 20:17:24.120062 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.120944 kubelet[2780]: W0213 20:17:24.120071 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120081 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120264 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.121339 kubelet[2780]: W0213 20:17:24.120271 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120279 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120460 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.121339 kubelet[2780]: W0213 20:17:24.120469 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120481 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120892 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.121339 kubelet[2780]: W0213 20:17:24.120908 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.121339 kubelet[2780]: E0213 20:17:24.120924 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.121613 kubelet[2780]: E0213 20:17:24.121114 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.121613 kubelet[2780]: W0213 20:17:24.121122 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.121613 kubelet[2780]: E0213 20:17:24.121131 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:24.121613 kubelet[2780]: E0213 20:17:24.121302 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.121613 kubelet[2780]: W0213 20:17:24.121312 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.121613 kubelet[2780]: E0213 20:17:24.121423 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.121613 kubelet[2780]: E0213 20:17:24.121615 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.123890 kubelet[2780]: W0213 20:17:24.121623 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.123890 kubelet[2780]: E0213 20:17:24.121632 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.123890 kubelet[2780]: E0213 20:17:24.122249 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.123890 kubelet[2780]: W0213 20:17:24.122285 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.123890 kubelet[2780]: E0213 20:17:24.122303 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.202704 kubelet[2780]: E0213 20:17:24.202593 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.202704 kubelet[2780]: W0213 20:17:24.202639 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.202704 kubelet[2780]: E0213 20:17:24.202673 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.203623 kubelet[2780]: E0213 20:17:24.203597 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.203623 kubelet[2780]: W0213 20:17:24.203622 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.203944 kubelet[2780]: E0213 20:17:24.203651 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:24.204548 kubelet[2780]: E0213 20:17:24.204400 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.204548 kubelet[2780]: W0213 20:17:24.204428 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.205472 kubelet[2780]: E0213 20:17:24.204525 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.205472 kubelet[2780]: E0213 20:17:24.204948 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.205472 kubelet[2780]: W0213 20:17:24.205214 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.205472 kubelet[2780]: E0213 20:17:24.205250 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.206377 kubelet[2780]: E0213 20:17:24.206346 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.206536 kubelet[2780]: W0213 20:17:24.206377 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.206536 kubelet[2780]: E0213 20:17:24.206417 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.207000 kubelet[2780]: E0213 20:17:24.206963 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.207000 kubelet[2780]: W0213 20:17:24.206989 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.207340 kubelet[2780]: E0213 20:17:24.207258 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.207757 kubelet[2780]: E0213 20:17:24.207729 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.207757 kubelet[2780]: W0213 20:17:24.207754 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.208157 kubelet[2780]: E0213 20:17:24.207971 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:24.208273 kubelet[2780]: E0213 20:17:24.208212 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.208273 kubelet[2780]: W0213 20:17:24.208227 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.208481 kubelet[2780]: E0213 20:17:24.208323 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.208928 kubelet[2780]: E0213 20:17:24.208792 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.208928 kubelet[2780]: W0213 20:17:24.208810 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.208928 kubelet[2780]: E0213 20:17:24.208870 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.209949 kubelet[2780]: E0213 20:17:24.209915 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.209949 kubelet[2780]: W0213 20:17:24.209948 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.210664 kubelet[2780]: E0213 20:17:24.210095 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.210664 kubelet[2780]: E0213 20:17:24.210379 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.210664 kubelet[2780]: W0213 20:17:24.210394 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.210664 kubelet[2780]: E0213 20:17:24.210421 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.210894 kubelet[2780]: E0213 20:17:24.210686 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.210894 kubelet[2780]: W0213 20:17:24.210700 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.210894 kubelet[2780]: E0213 20:17:24.210746 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:24.211077 kubelet[2780]: E0213 20:17:24.211055 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.211077 kubelet[2780]: W0213 20:17:24.211077 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.211152 kubelet[2780]: E0213 20:17:24.211103 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.211653 kubelet[2780]: E0213 20:17:24.211613 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.211653 kubelet[2780]: W0213 20:17:24.211635 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.211761 kubelet[2780]: E0213 20:17:24.211677 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.212495 kubelet[2780]: E0213 20:17:24.212327 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.212495 kubelet[2780]: W0213 20:17:24.212350 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.212495 kubelet[2780]: E0213 20:17:24.212388 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.213292 kubelet[2780]: E0213 20:17:24.213123 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.213292 kubelet[2780]: W0213 20:17:24.213148 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.213292 kubelet[2780]: E0213 20:17:24.213173 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:24.214076 kubelet[2780]: E0213 20:17:24.213924 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:24.214076 kubelet[2780]: W0213 20:17:24.213943 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:24.214076 kubelet[2780]: E0213 20:17:24.213955 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
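
What the collapsed burst records: the kubelet's plugin prober repeatedly execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the exec fails (executable file not found in $PATH), so the driver produces no output, and unmarshalling that empty output is what yields "unexpected end of JSON input". A minimal Go sketch of the two failing steps follows; DriverStatus here is an illustrative stand-in for the JSON a FlexVolume driver is expected to print, not the kubelet's own type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus stands in for the JSON object a FlexVolume driver prints on
// stdout; a healthy "init" call would emit something like {"status":"Success"}.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Step 1: exec the driver binary, as the kubelet's prober does. On this
	// node the binary does not exist yet, so the call fails and out is empty.
	out, _ := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()

	// Step 2: unmarshal whatever came back. An empty byte slice is not valid
	// JSON, which is exactly the error repeated throughout the log.
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}
}
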
Feb 13 20:17:24.324909 containerd[1590]: time="2025-02-13T20:17:24.324506694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:24.327903 containerd[1590]: time="2025-02-13T20:17:24.326968848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:17:24.332525 containerd[1590]: time="2025-02-13T20:17:24.329039140Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:24.333762 containerd[1590]: time="2025-02-13T20:17:24.333660836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:24.335329 containerd[1590]: time="2025-02-13T20:17:24.335260795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.145726331s" Feb 13 20:17:24.335520 containerd[1590]: time="2025-02-13T20:17:24.335497977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:17:24.342248 containerd[1590]: time="2025-02-13T20:17:24.342164678Z" level=info msg="CreateContainer within sandbox \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:17:24.403272 containerd[1590]: time="2025-02-13T20:17:24.403175564Z" level=info msg="CreateContainer within sandbox \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7\"" Feb 13 20:17:24.405911 containerd[1590]: time="2025-02-13T20:17:24.404332497Z" level=info msg="StartContainer for \"587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7\"" Feb 13 20:17:24.483654 systemd[1]: run-containerd-runc-k8s.io-587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7-runc.7Jkg0a.mount: Deactivated successfully. 
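
The pull above is the fix arriving: pod2daemon-flexvol is the image behind the flexvol-driver container created just after it, and what that container installs is the nodeagent~uds/uds driver the kubelet was probing for, which is consistent with no further FlexVolume probe errors appearing below. The record also carries enough numbers for a sanity check: 5,362,121 bytes were fetched in 2.145726331 s, while the reported size of 6855165 bytes is the image's total size, presumably counting content that did not need to be re-fetched. A throwaway sketch of the arithmetic, with variable names of my choosing:

package main

import "fmt"

func main() {
	const bytesRead = 5362121       // "bytes read" from the stop-pulling record
	const pullSeconds = 2.145726331 // duration from the "Pulled image ... in" record

	// Effective network throughput for this pull.
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.2f MiB in %.2fs = %.2f MiB/s\n", mib, pullSeconds, mib/pullSeconds)
	// Output: 5.11 MiB in 2.15s = 2.38 MiB/s
}
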
Feb 13 20:17:24.548808 containerd[1590]: time="2025-02-13T20:17:24.548483312Z" level=info msg="StartContainer for \"587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7\" returns successfully" Feb 13 20:17:24.633504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7-rootfs.mount: Deactivated successfully. Feb 13 20:17:24.670862 containerd[1590]: time="2025-02-13T20:17:24.647266663Z" level=info msg="shim disconnected" id=587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7 namespace=k8s.io Feb 13 20:17:24.670862 containerd[1590]: time="2025-02-13T20:17:24.670857857Z" level=warning msg="cleaning up after shim disconnected" id=587c96e8485789ed9856d5bff5533f1269e127f46ee7cfa977ac4a9b69ac21b7 namespace=k8s.io Feb 13 20:17:24.670862 containerd[1590]: time="2025-02-13T20:17:24.670888903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:17:25.079695 kubelet[2780]: E0213 20:17:25.079452 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:25.083719 containerd[1590]: time="2025-02-13T20:17:25.083291643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:17:25.703011 kubelet[2780]: E0213 20:17:25.701186 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:27.701975 kubelet[2780]: E0213 20:17:27.700794 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:29.701942 kubelet[2780]: E0213 20:17:29.701030 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:31.255888 containerd[1590]: time="2025-02-13T20:17:31.255715162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:31.262952 containerd[1590]: time="2025-02-13T20:17:31.261033285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:17:31.268900 containerd[1590]: time="2025-02-13T20:17:31.267512645Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:31.276022 containerd[1590]: time="2025-02-13T20:17:31.275939998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:31.277527 containerd[1590]: time="2025-02-13T20:17:31.277462429Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.194097768s" Feb 13 20:17:31.277527 containerd[1590]: time="2025-02-13T20:17:31.277530392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:17:31.284155 containerd[1590]: time="2025-02-13T20:17:31.283971374Z" level=info msg="CreateContainer within sandbox \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:17:31.362660 containerd[1590]: time="2025-02-13T20:17:31.362530691Z" level=info msg="CreateContainer within sandbox \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0909feb107ce4152ec97ea4a8ecc46274f6b33bd0091cad7e909e082e1fa418d\"" Feb 13 20:17:31.367535 containerd[1590]: time="2025-02-13T20:17:31.364787890Z" level=info msg="StartContainer for \"0909feb107ce4152ec97ea4a8ecc46274f6b33bd0091cad7e909e082e1fa418d\"" Feb 13 20:17:31.707968 kubelet[2780]: E0213 20:17:31.701520 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:31.802001 containerd[1590]: time="2025-02-13T20:17:31.801925244Z" level=info msg="StartContainer for \"0909feb107ce4152ec97ea4a8ecc46274f6b33bd0091cad7e909e082e1fa418d\" returns successfully" Feb 13 20:17:32.116800 kubelet[2780]: E0213 20:17:32.112994 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:33.116979 kubelet[2780]: E0213 20:17:33.116815 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:33.621874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0909feb107ce4152ec97ea4a8ecc46274f6b33bd0091cad7e909e082e1fa418d-rootfs.mount: Deactivated successfully. 
Feb 13 20:17:33.626742 kubelet[2780]: I0213 20:17:33.625627 2780 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:17:33.635748 containerd[1590]: time="2025-02-13T20:17:33.635497250Z" level=info msg="shim disconnected" id=0909feb107ce4152ec97ea4a8ecc46274f6b33bd0091cad7e909e082e1fa418d namespace=k8s.io Feb 13 20:17:33.637496 containerd[1590]: time="2025-02-13T20:17:33.635863193Z" level=warning msg="cleaning up after shim disconnected" id=0909feb107ce4152ec97ea4a8ecc46274f6b33bd0091cad7e909e082e1fa418d namespace=k8s.io Feb 13 20:17:33.637496 containerd[1590]: time="2025-02-13T20:17:33.636026435Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:17:33.747457 containerd[1590]: time="2025-02-13T20:17:33.746755408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s86p,Uid:23f309b9-4a91-4126-8a72-5d65e6b18bef,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:33.761426 kubelet[2780]: I0213 20:17:33.759469 2780 topology_manager.go:215] "Topology Admit Handler" podUID="d1ace4ad-b993-4fad-a1f3-c05836f90411" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8rrs5" Feb 13 20:17:33.761426 kubelet[2780]: I0213 20:17:33.759704 2780 topology_manager.go:215] "Topology Admit Handler" podUID="52ce07cf-126c-4648-b0ce-675124d0c399" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hnks" Feb 13 20:17:33.769740 kubelet[2780]: I0213 20:17:33.769657 2780 topology_manager.go:215] "Topology Admit Handler" podUID="314fcfd7-54bc-4098-8fb4-0a3c2b4eec50" podNamespace="calico-apiserver" podName="calico-apiserver-747d6676bd-qkf2k" Feb 13 20:17:33.781414 kubelet[2780]: I0213 20:17:33.780972 2780 topology_manager.go:215] "Topology Admit Handler" podUID="5345eab2-cc0b-40e4-a4c7-074faddca668" podNamespace="calico-apiserver" podName="calico-apiserver-747d6676bd-nwxvr" Feb 13 20:17:33.790254 kubelet[2780]: I0213 20:17:33.784543 2780 topology_manager.go:215] "Topology Admit Handler" podUID="59fc9a5d-0832-41bf-8c96-780c4d20ba9b" podNamespace="calico-system" podName="calico-kube-controllers-57c6d949f5-dfcnc" Feb 13 20:17:33.854791 kubelet[2780]: I0213 20:17:33.854713 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpj7\" (UniqueName: \"kubernetes.io/projected/59fc9a5d-0832-41bf-8c96-780c4d20ba9b-kube-api-access-nvpj7\") pod \"calico-kube-controllers-57c6d949f5-dfcnc\" (UID: \"59fc9a5d-0832-41bf-8c96-780c4d20ba9b\") " pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" Feb 13 20:17:33.855309 kubelet[2780]: I0213 20:17:33.855276 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/314fcfd7-54bc-4098-8fb4-0a3c2b4eec50-calico-apiserver-certs\") pod \"calico-apiserver-747d6676bd-qkf2k\" (UID: \"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50\") " pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" Feb 13 20:17:33.856622 kubelet[2780]: I0213 20:17:33.856587 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmw9\" (UniqueName: \"kubernetes.io/projected/5345eab2-cc0b-40e4-a4c7-074faddca668-kube-api-access-bhmw9\") pod \"calico-apiserver-747d6676bd-nwxvr\" (UID: \"5345eab2-cc0b-40e4-a4c7-074faddca668\") " pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" Feb 13 20:17:33.856780 kubelet[2780]: I0213 20:17:33.856641 2780 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59fc9a5d-0832-41bf-8c96-780c4d20ba9b-tigera-ca-bundle\") pod \"calico-kube-controllers-57c6d949f5-dfcnc\" (UID: \"59fc9a5d-0832-41bf-8c96-780c4d20ba9b\") " pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" Feb 13 20:17:33.856780 kubelet[2780]: I0213 20:17:33.856678 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszxb\" (UniqueName: \"kubernetes.io/projected/d1ace4ad-b993-4fad-a1f3-c05836f90411-kube-api-access-nszxb\") pod \"coredns-7db6d8ff4d-8rrs5\" (UID: \"d1ace4ad-b993-4fad-a1f3-c05836f90411\") " pod="kube-system/coredns-7db6d8ff4d-8rrs5" Feb 13 20:17:33.856780 kubelet[2780]: I0213 20:17:33.856712 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstcl\" (UniqueName: \"kubernetes.io/projected/52ce07cf-126c-4648-b0ce-675124d0c399-kube-api-access-mstcl\") pod \"coredns-7db6d8ff4d-9hnks\" (UID: \"52ce07cf-126c-4648-b0ce-675124d0c399\") " pod="kube-system/coredns-7db6d8ff4d-9hnks" Feb 13 20:17:33.856780 kubelet[2780]: I0213 20:17:33.856748 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5345eab2-cc0b-40e4-a4c7-074faddca668-calico-apiserver-certs\") pod \"calico-apiserver-747d6676bd-nwxvr\" (UID: \"5345eab2-cc0b-40e4-a4c7-074faddca668\") " pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" Feb 13 20:17:33.857009 kubelet[2780]: I0213 20:17:33.856779 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1ace4ad-b993-4fad-a1f3-c05836f90411-config-volume\") pod \"coredns-7db6d8ff4d-8rrs5\" (UID: \"d1ace4ad-b993-4fad-a1f3-c05836f90411\") " pod="kube-system/coredns-7db6d8ff4d-8rrs5" Feb 13 20:17:33.857009 kubelet[2780]: I0213 20:17:33.856810 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ce07cf-126c-4648-b0ce-675124d0c399-config-volume\") pod \"coredns-7db6d8ff4d-9hnks\" (UID: \"52ce07cf-126c-4648-b0ce-675124d0c399\") " pod="kube-system/coredns-7db6d8ff4d-9hnks" Feb 13 20:17:33.859122 kubelet[2780]: I0213 20:17:33.859068 2780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr58c\" (UniqueName: \"kubernetes.io/projected/314fcfd7-54bc-4098-8fb4-0a3c2b4eec50-kube-api-access-tr58c\") pod \"calico-apiserver-747d6676bd-qkf2k\" (UID: \"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50\") " pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" Feb 13 20:17:34.212054 kubelet[2780]: E0213 20:17:34.211998 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:34.222382 containerd[1590]: time="2025-02-13T20:17:34.221658309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:17:34.440419 kubelet[2780]: E0213 20:17:34.439422 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:34.441197 kubelet[2780]: E0213 20:17:34.439050 2780 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:34.448882 containerd[1590]: time="2025-02-13T20:17:34.448392041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnks,Uid:52ce07cf-126c-4648-b0ce-675124d0c399,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:34.454714 containerd[1590]: time="2025-02-13T20:17:34.454604996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d949f5-dfcnc,Uid:59fc9a5d-0832-41bf-8c96-780c4d20ba9b,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:34.461886 containerd[1590]: time="2025-02-13T20:17:34.461687347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rrs5,Uid:d1ace4ad-b993-4fad-a1f3-c05836f90411,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:34.517298 containerd[1590]: time="2025-02-13T20:17:34.515761278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-qkf2k,Uid:314fcfd7-54bc-4098-8fb4-0a3c2b4eec50,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:17:34.525366 containerd[1590]: time="2025-02-13T20:17:34.525115583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-nwxvr,Uid:5345eab2-cc0b-40e4-a4c7-074faddca668,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:17:34.802296 containerd[1590]: time="2025-02-13T20:17:34.801796088Z" level=error msg="Failed to destroy network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:34.845639 containerd[1590]: time="2025-02-13T20:17:34.840536602Z" level=error msg="encountered an error cleaning up failed sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:34.949652 containerd[1590]: time="2025-02-13T20:17:34.949550462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s86p,Uid:23f309b9-4a91-4126-8a72-5d65e6b18bef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:34.959555 kubelet[2780]: E0213 20:17:34.959166 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:34.959555 kubelet[2780]: E0213 20:17:34.959338 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:34.964075 kubelet[2780]: E0213 20:17:34.961727 2780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7s86p" Feb 13 20:17:34.964075 kubelet[2780]: E0213 20:17:34.962233 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7s86p_calico-system(23f309b9-4a91-4126-8a72-5d65e6b18bef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7s86p_calico-system(23f309b9-4a91-4126-8a72-5d65e6b18bef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:35.045648 containerd[1590]: time="2025-02-13T20:17:35.045410142Z" level=error msg="Failed to destroy network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.046977 containerd[1590]: time="2025-02-13T20:17:35.046604770Z" level=error msg="encountered an error cleaning up failed sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.046977 containerd[1590]: time="2025-02-13T20:17:35.046717348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnks,Uid:52ce07cf-126c-4648-b0ce-675124d0c399,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.047818 kubelet[2780]: E0213 20:17:35.047156 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.047818 kubelet[2780]: E0213 20:17:35.047493 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hnks" Feb 13 20:17:35.047818 kubelet[2780]: E0213 20:17:35.047529 2780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hnks" Feb 13 20:17:35.050040 kubelet[2780]: E0213 20:17:35.047612 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hnks_kube-system(52ce07cf-126c-4648-b0ce-675124d0c399)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9hnks_kube-system(52ce07cf-126c-4648-b0ce-675124d0c399)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hnks" podUID="52ce07cf-126c-4648-b0ce-675124d0c399" Feb 13 20:17:35.253928 kubelet[2780]: I0213 20:17:35.253878 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:35.264868 containerd[1590]: time="2025-02-13T20:17:35.263454559Z" level=error msg="Failed to destroy network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.267777 containerd[1590]: time="2025-02-13T20:17:35.267047758Z" level=error msg="encountered an error cleaning up failed sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.268508 containerd[1590]: time="2025-02-13T20:17:35.268332140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rrs5,Uid:d1ace4ad-b993-4fad-a1f3-c05836f90411,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.287169 kubelet[2780]: E0213 20:17:35.280117 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.287169 kubelet[2780]: E0213 20:17:35.280213 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8rrs5" Feb 13 20:17:35.287169 kubelet[2780]: E0213 20:17:35.280244 2780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8rrs5" Feb 13 20:17:35.287552 kubelet[2780]: E0213 20:17:35.280299 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8rrs5_kube-system(d1ace4ad-b993-4fad-a1f3-c05836f90411)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8rrs5_kube-system(d1ace4ad-b993-4fad-a1f3-c05836f90411)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8rrs5" podUID="d1ace4ad-b993-4fad-a1f3-c05836f90411" Feb 13 20:17:35.288866 containerd[1590]: time="2025-02-13T20:17:35.287088381Z" level=error msg="Failed to destroy network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.293442 containerd[1590]: time="2025-02-13T20:17:35.290133377Z" level=error msg="encountered an error cleaning up failed sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.293442 containerd[1590]: time="2025-02-13T20:17:35.290257418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-qkf2k,Uid:314fcfd7-54bc-4098-8fb4-0a3c2b4eec50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.294062 kubelet[2780]: E0213 20:17:35.291097 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.294062 kubelet[2780]: E0213 20:17:35.291174 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" Feb 13 20:17:35.294062 kubelet[2780]: E0213 20:17:35.291500 2780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" Feb 13 20:17:35.294266 kubelet[2780]: E0213 20:17:35.291622 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747d6676bd-qkf2k_calico-apiserver(314fcfd7-54bc-4098-8fb4-0a3c2b4eec50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747d6676bd-qkf2k_calico-apiserver(314fcfd7-54bc-4098-8fb4-0a3c2b4eec50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" podUID="314fcfd7-54bc-4098-8fb4-0a3c2b4eec50" Feb 13 20:17:35.299755 kubelet[2780]: I0213 20:17:35.299617 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:35.304697 containerd[1590]: time="2025-02-13T20:17:35.304612430Z" level=info msg="StopPodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\"" Feb 13 20:17:35.316201 containerd[1590]: time="2025-02-13T20:17:35.315873255Z" level=info msg="StopPodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\"" Feb 13 20:17:35.319591 containerd[1590]: time="2025-02-13T20:17:35.318748888Z" level=info msg="Ensure that sandbox a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980 in task-service has been cleanup successfully" Feb 13 20:17:35.326488 containerd[1590]: time="2025-02-13T20:17:35.326405366Z" level=info msg="Ensure that sandbox 696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394 in task-service has been cleanup successfully" Feb 13 20:17:35.328743 containerd[1590]: time="2025-02-13T20:17:35.327449851Z" level=error msg="Failed to destroy network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.331527 containerd[1590]: time="2025-02-13T20:17:35.331345487Z" level=error msg="encountered an error cleaning up 
failed sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.331527 containerd[1590]: time="2025-02-13T20:17:35.331456530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d949f5-dfcnc,Uid:59fc9a5d-0832-41bf-8c96-780c4d20ba9b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.332381 kubelet[2780]: E0213 20:17:35.331989 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.333636 kubelet[2780]: E0213 20:17:35.332685 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" Feb 13 20:17:35.333636 kubelet[2780]: E0213 20:17:35.332768 2780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" Feb 13 20:17:35.333636 kubelet[2780]: E0213 20:17:35.332907 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57c6d949f5-dfcnc_calico-system(59fc9a5d-0832-41bf-8c96-780c4d20ba9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57c6d949f5-dfcnc_calico-system(59fc9a5d-0832-41bf-8c96-780c4d20ba9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" podUID="59fc9a5d-0832-41bf-8c96-780c4d20ba9b" Feb 13 20:17:35.351229 containerd[1590]: time="2025-02-13T20:17:35.349456351Z" level=error msg="Failed to destroy network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 20:17:35.352076 containerd[1590]: time="2025-02-13T20:17:35.352012092Z" level=error msg="encountered an error cleaning up failed sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.352229 containerd[1590]: time="2025-02-13T20:17:35.352123226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-nwxvr,Uid:5345eab2-cc0b-40e4-a4c7-074faddca668,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.352953 kubelet[2780]: E0213 20:17:35.352717 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.353365 kubelet[2780]: E0213 20:17:35.352918 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" Feb 13 20:17:35.353774 kubelet[2780]: E0213 20:17:35.353577 2780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" Feb 13 20:17:35.355239 kubelet[2780]: E0213 20:17:35.354248 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747d6676bd-nwxvr_calico-apiserver(5345eab2-cc0b-40e4-a4c7-074faddca668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747d6676bd-nwxvr_calico-apiserver(5345eab2-cc0b-40e4-a4c7-074faddca668)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" podUID="5345eab2-cc0b-40e4-a4c7-074faddca668" Feb 13 20:17:35.438381 containerd[1590]: time="2025-02-13T20:17:35.438281519Z" level=error msg="StopPodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" failed" error="failed to 
destroy network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.439673 kubelet[2780]: E0213 20:17:35.439337 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:35.439673 kubelet[2780]: E0213 20:17:35.439430 2780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394"} Feb 13 20:17:35.439673 kubelet[2780]: E0213 20:17:35.439530 2780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23f309b9-4a91-4126-8a72-5d65e6b18bef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:17:35.439673 kubelet[2780]: E0213 20:17:35.439567 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23f309b9-4a91-4126-8a72-5d65e6b18bef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7s86p" podUID="23f309b9-4a91-4126-8a72-5d65e6b18bef" Feb 13 20:17:35.467510 containerd[1590]: time="2025-02-13T20:17:35.467422600Z" level=error msg="StopPodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" failed" error="failed to destroy network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:35.468467 kubelet[2780]: E0213 20:17:35.468287 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:35.468659 kubelet[2780]: E0213 20:17:35.468486 2780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980"} Feb 13 20:17:35.468659 kubelet[2780]: E0213 
20:17:35.468546 2780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52ce07cf-126c-4648-b0ce-675124d0c399\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:17:35.468659 kubelet[2780]: E0213 20:17:35.468594 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52ce07cf-126c-4648-b0ce-675124d0c399\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hnks" podUID="52ce07cf-126c-4648-b0ce-675124d0c399" Feb 13 20:17:35.623267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a-shm.mount: Deactivated successfully. Feb 13 20:17:35.623598 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810-shm.mount: Deactivated successfully. Feb 13 20:17:35.623886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757-shm.mount: Deactivated successfully. Feb 13 20:17:35.624087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980-shm.mount: Deactivated successfully. Feb 13 20:17:35.624265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394-shm.mount: Deactivated successfully. 
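
Every failed sandbox above, add and delete alike, dies on the same stat: Calico's CNI plugin reads /var/lib/calico/nodename, a file the calico/node container writes once it is running, as the error text itself says. Until that container is up, every CNI call fails before any networking is attempted, which is why five different pods cycle through near-identical CreatePodSandbox and KillPodSandbox errors. A minimal sketch of the gating check, not Calico's actual code:

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path from the error message; calico/node creates it at
// startup, and the CNI plugin consults it on every ADD and DEL.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Before calico/node has started, this is the first thing to fail,
		// so every sandbox operation reports the same error.
		fmt.Printf("stat %s: %v; check that the calico/node container is running\n",
			nodenameFile, err)
		return
	}
	fmt.Println("nodename present; CNI operations can proceed")
}
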
Feb 13 20:17:36.249751 kubelet[2780]: I0213 20:17:36.245131 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:36.249751 kubelet[2780]: E0213 20:17:36.246678 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:36.368883 kubelet[2780]: E0213 20:17:36.365100 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:36.381705 kubelet[2780]: I0213 20:17:36.374960 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:17:36.381705 kubelet[2780]: I0213 20:17:36.375017 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:36.381705 kubelet[2780]: I0213 20:17:36.375047 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:17:36.381705 kubelet[2780]: I0213 20:17:36.375069 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:36.407071 containerd[1590]: time="2025-02-13T20:17:36.406712858Z" level=info msg="StopPodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\"" Feb 13 20:17:36.414185 containerd[1590]: time="2025-02-13T20:17:36.413920567Z" level=info msg="Ensure that sandbox 43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a in task-service has been cleanup successfully" Feb 13 20:17:36.422185 containerd[1590]: time="2025-02-13T20:17:36.422121067Z" level=info msg="StopPodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\"" Feb 13 20:17:36.440344 containerd[1590]: time="2025-02-13T20:17:36.439394487Z" level=info msg="Ensure that sandbox 02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf in task-service has been cleanup successfully" Feb 13 20:17:36.443727 containerd[1590]: time="2025-02-13T20:17:36.427660243Z" level=info msg="StopPodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\"" Feb 13 20:17:36.447961 containerd[1590]: time="2025-02-13T20:17:36.447455380Z" level=info msg="Ensure that sandbox 16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810 in task-service has been cleanup successfully" Feb 13 20:17:36.480286 containerd[1590]: time="2025-02-13T20:17:36.480223359Z" level=info msg="StopPodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\"" Feb 13 20:17:36.484646 containerd[1590]: time="2025-02-13T20:17:36.484583120Z" level=info msg="Ensure that sandbox 6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757 in task-service has been cleanup successfully" Feb 13 20:17:36.728465 containerd[1590]: time="2025-02-13T20:17:36.727528187Z" level=error msg="StopPodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" failed" error="failed to destroy network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:36.729106 kubelet[2780]: E0213 20:17:36.728188 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:36.729106 kubelet[2780]: E0213 20:17:36.728277 2780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf"} Feb 13 20:17:36.729106 kubelet[2780]: E0213 20:17:36.728346 2780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:17:36.729106 kubelet[2780]: E0213 20:17:36.728395 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" podUID="314fcfd7-54bc-4098-8fb4-0a3c2b4eec50" Feb 13 20:17:36.734151 containerd[1590]: time="2025-02-13T20:17:36.731252242Z" level=error msg="StopPodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" failed" error="failed to destroy network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:36.735529 kubelet[2780]: E0213 20:17:36.733927 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:17:36.735529 kubelet[2780]: E0213 20:17:36.733997 2780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810"} Feb 13 20:17:36.735529 kubelet[2780]: E0213 20:17:36.734042 2780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5345eab2-cc0b-40e4-a4c7-074faddca668\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:17:36.735529 kubelet[2780]: E0213 20:17:36.734079 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5345eab2-cc0b-40e4-a4c7-074faddca668\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" podUID="5345eab2-cc0b-40e4-a4c7-074faddca668" Feb 13 20:17:36.739457 containerd[1590]: time="2025-02-13T20:17:36.739379049Z" level=error msg="StopPodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" failed" error="failed to destroy network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:36.740015 kubelet[2780]: E0213 20:17:36.739942 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:17:36.740577 kubelet[2780]: E0213 20:17:36.740527 2780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757"} Feb 13 20:17:36.741727 kubelet[2780]: E0213 20:17:36.741592 2780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59fc9a5d-0832-41bf-8c96-780c4d20ba9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:17:36.741727 kubelet[2780]: E0213 20:17:36.741662 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59fc9a5d-0832-41bf-8c96-780c4d20ba9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" podUID="59fc9a5d-0832-41bf-8c96-780c4d20ba9b" Feb 13 20:17:36.764728 containerd[1590]: time="2025-02-13T20:17:36.764638945Z" level=error 
msg="StopPodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" failed" error="failed to destroy network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:36.765735 kubelet[2780]: E0213 20:17:36.765411 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:36.765735 kubelet[2780]: E0213 20:17:36.765517 2780 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a"} Feb 13 20:17:36.765735 kubelet[2780]: E0213 20:17:36.765577 2780 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1ace4ad-b993-4fad-a1f3-c05836f90411\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:17:36.765735 kubelet[2780]: E0213 20:17:36.765622 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1ace4ad-b993-4fad-a1f3-c05836f90411\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8rrs5" podUID="d1ace4ad-b993-4fad-a1f3-c05836f90411" Feb 13 20:17:45.762935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229257037.mount: Deactivated successfully. 
Feb 13 20:17:46.067792 containerd[1590]: time="2025-02-13T20:17:45.948267501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:17:46.080547 containerd[1590]: time="2025-02-13T20:17:46.080433349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:46.176398 containerd[1590]: time="2025-02-13T20:17:46.175967857Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:46.197934 containerd[1590]: time="2025-02-13T20:17:46.195515529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:46.212247 containerd[1590]: time="2025-02-13T20:17:46.204551155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.975338453s" Feb 13 20:17:46.212247 containerd[1590]: time="2025-02-13T20:17:46.204644526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:17:46.332342 containerd[1590]: time="2025-02-13T20:17:46.332183742Z" level=info msg="CreateContainer within sandbox \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:17:46.479185 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:17:46.475811 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:17:46.475961 systemd-resolved[1484]: Flushed all caches. Feb 13 20:17:46.535245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168224907.mount: Deactivated successfully. Feb 13 20:17:46.567592 containerd[1590]: time="2025-02-13T20:17:46.567176301Z" level=info msg="CreateContainer within sandbox \"e0372b7009c2288ae246148d67086ab488090ad909e307649c306d88e14f8a6f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7a6af2ae75bad4a4bbd6949c3ba28186901820ec01ad3b8ab89e2e41e050cd9\"" Feb 13 20:17:46.570977 containerd[1590]: time="2025-02-13T20:17:46.568288615Z" level=info msg="StartContainer for \"b7a6af2ae75bad4a4bbd6949c3ba28186901820ec01ad3b8ab89e2e41e050cd9\"" Feb 13 20:17:46.905869 containerd[1590]: time="2025-02-13T20:17:46.905141533Z" level=info msg="StartContainer for \"b7a6af2ae75bad4a4bbd6949c3ba28186901820ec01ad3b8ab89e2e41e050cd9\" returns successfully" Feb 13 20:17:47.084496 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:17:47.091785 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
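The PullImage, CreateContainer, and StartContainer records above are the kubelet driving containerd over CRI. For orientation only, the same three steps look roughly like this through containerd's public Go client; the socket path and the k8s.io namespace match common CRI defaults, the container and snapshot IDs are made up for the example, and the kubelet does not literally call this API.

package main

// Orientation sketch: pull/create/start against containerd's Go client,
// mirroring the log records above. IDs are hypothetical; kubelet uses
// the CRI gRPC surface rather than this client library.

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Corresponds to the PullImage record; WithPullUnpack also unpacks
	// the layers into a snapshot.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer maps roughly onto NewContainer with a fresh
	// snapshot and an OCI spec derived from the image config.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer maps onto creating a task and starting it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started:", task.ID())
}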
Feb 13 20:17:47.456068 kubelet[2780]: E0213 20:17:47.455993 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:47.553250 kubelet[2780]: I0213 20:17:47.546209 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m4gf5" podStartSLOduration=2.519100617 podStartE2EDuration="30.538999649s" podCreationTimestamp="2025-02-13 20:17:17 +0000 UTC" firstStartedPulling="2025-02-13 20:17:18.210431398 +0000 UTC m=+23.068622001" lastFinishedPulling="2025-02-13 20:17:46.230330428 +0000 UTC m=+51.088521033" observedRunningTime="2025-02-13 20:17:47.505069025 +0000 UTC m=+52.363259627" watchObservedRunningTime="2025-02-13 20:17:47.538999649 +0000 UTC m=+52.397190254" Feb 13 20:17:48.523071 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:17:48.525471 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:17:48.523082 systemd-resolved[1484]: Flushed all caches. Feb 13 20:17:48.702569 containerd[1590]: time="2025-02-13T20:17:48.701932955Z" level=info msg="StopPodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\"" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:48.857 [INFO][3958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:48.860 [INFO][3958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" iface="eth0" netns="/var/run/netns/cni-5f1dc667-9296-b673-6e33-9e762035d25b" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:48.862 [INFO][3958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" iface="eth0" netns="/var/run/netns/cni-5f1dc667-9296-b673-6e33-9e762035d25b" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:48.864 [INFO][3958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" iface="eth0" netns="/var/run/netns/cni-5f1dc667-9296-b673-6e33-9e762035d25b" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:48.864 [INFO][3958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:48.864 [INFO][3958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.109 [INFO][4000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.112 [INFO][4000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.113 [INFO][4000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.156 [WARNING][4000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.156 [INFO][4000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.160 [INFO][4000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:49.175318 containerd[1590]: 2025-02-13 20:17:49.168 [INFO][3958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:49.179279 containerd[1590]: time="2025-02-13T20:17:49.179188139Z" level=info msg="TearDown network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" successfully" Feb 13 20:17:49.179279 containerd[1590]: time="2025-02-13T20:17:49.179279998Z" level=info msg="StopPodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" returns successfully" Feb 13 20:17:49.187556 systemd[1]: run-netns-cni\x2d5f1dc667\x2d9296\x2db673\x2d6e33\x2d9e762035d25b.mount: Deactivated successfully. Feb 13 20:17:49.190608 containerd[1590]: time="2025-02-13T20:17:49.189089828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-qkf2k,Uid:314fcfd7-54bc-4098-8fb4-0a3c2b4eec50,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:17:49.629147 systemd-networkd[1224]: cali9b4c87cf45e: Link UP Feb 13 20:17:49.629481 systemd-networkd[1224]: cali9b4c87cf45e: Gained carrier Feb 13 20:17:49.728850 containerd[1590]: time="2025-02-13T20:17:49.727031683Z" level=info msg="StopPodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\"" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.284 [INFO][4055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.316 [INFO][4055] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0 calico-apiserver-747d6676bd- calico-apiserver 314fcfd7-54bc-4098-8fb4-0a3c2b4eec50 851 0 2025-02-13 20:17:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747d6676bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-d-1eeb8951e4 calico-apiserver-747d6676bd-qkf2k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b4c87cf45e [] []}} ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.316 
[INFO][4055] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.447 [INFO][4067] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" HandleID="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.478 [INFO][4067] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" HandleID="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-d-1eeb8951e4", "pod":"calico-apiserver-747d6676bd-qkf2k", "timestamp":"2025-02-13 20:17:49.447792734 +0000 UTC"}, Hostname:"ci-4081.3.1-d-1eeb8951e4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.478 [INFO][4067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.479 [INFO][4067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.479 [INFO][4067] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-d-1eeb8951e4' Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.483 [INFO][4067] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.496 [INFO][4067] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.519 [INFO][4067] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.523 [INFO][4067] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.531 [INFO][4067] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.531 [INFO][4067] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.538 [INFO][4067] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.552 [INFO][4067] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.572 [INFO][4067] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.1/26] block=192.168.18.0/26 handle="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.572 [INFO][4067] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.1/26] handle="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.573 [INFO][4067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:17:49.740876 containerd[1590]: 2025-02-13 20:17:49.573 [INFO][4067] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.1/26] IPv6=[] ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" HandleID="k8s-pod-network.56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.757148 containerd[1590]: 2025-02-13 20:17:49.583 [INFO][4055] cni-plugin/k8s.go 386: Populated endpoint ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"", Pod:"calico-apiserver-747d6676bd-qkf2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4c87cf45e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:49.757148 containerd[1590]: 2025-02-13 20:17:49.585 [INFO][4055] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.1/32] ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.757148 containerd[1590]: 2025-02-13 20:17:49.586 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b4c87cf45e ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.757148 containerd[1590]: 2025-02-13 20:17:49.649 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.757148 containerd[1590]: 2025-02-13 20:17:49.657 [INFO][4055] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa", Pod:"calico-apiserver-747d6676bd-qkf2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4c87cf45e", MAC:"aa:87:91:48:18:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:49.757148 containerd[1590]: 2025-02-13 20:17:49.711 [INFO][4055] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-qkf2k" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:49.757148 containerd[1590]: time="2025-02-13T20:17:49.751603707Z" level=info msg="StopPodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\"" Feb 13 20:17:50.042113 containerd[1590]: time="2025-02-13T20:17:50.038817226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:50.052710 containerd[1590]: time="2025-02-13T20:17:50.042301431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:50.052710 containerd[1590]: time="2025-02-13T20:17:50.042336520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:50.052710 containerd[1590]: time="2025-02-13T20:17:50.042512138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:50.179793 systemd[1]: run-containerd-runc-k8s.io-56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa-runc.0Y9cFt.mount: Deactivated successfully. 
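The IPAM trace above shows Calico confirming the host-affine block 192.168.18.0/26 for ci-4081.3.1-d-1eeb8951e4 and claiming its first address, 192.168.18.1 (the csi-node-driver and coredns pods later get .2 and .3 from the same block). A quick arithmetic check with the standard library, using only values copied from the log; the block-affinity logic itself lives in Calico's IPAM, not here:

package main

// Sanity-check the IPAM trace: 192.168.18.1 sits inside the host's
// affine block 192.168.18.0/26, and a /26 holds 64 addresses.

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.18.0/26")
	ip := netip.MustParseAddr("192.168.18.1")

	fmt.Println(block.Contains(ip))       // true: the claimed address is inside the block
	fmt.Println(1 << (32 - block.Bits())) // 64: addresses per /26 block
}

Workload endpoints still get a /32 in the datastore (IPNetworks:["192.168.18.1/32"] above); the /26 only scopes which host may hand the address out.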
Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.335 [INFO][4115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.337 [INFO][4115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" iface="eth0" netns="/var/run/netns/cni-710dfe8c-b88f-1c46-bbbb-101f7bd0f260" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.339 [INFO][4115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" iface="eth0" netns="/var/run/netns/cni-710dfe8c-b88f-1c46-bbbb-101f7bd0f260" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.342 [INFO][4115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" iface="eth0" netns="/var/run/netns/cni-710dfe8c-b88f-1c46-bbbb-101f7bd0f260" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.344 [INFO][4115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.344 [INFO][4115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.486 [INFO][4170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.486 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.486 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.515 [WARNING][4170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.516 [INFO][4170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.521 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:50.543567 containerd[1590]: 2025-02-13 20:17:50.527 [INFO][4115] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.268 [INFO][4114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.269 [INFO][4114] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" iface="eth0" netns="/var/run/netns/cni-27d3e980-8851-f591-20a8-22f068583f37" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.270 [INFO][4114] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" iface="eth0" netns="/var/run/netns/cni-27d3e980-8851-f591-20a8-22f068583f37" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.274 [INFO][4114] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" iface="eth0" netns="/var/run/netns/cni-27d3e980-8851-f591-20a8-22f068583f37" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.274 [INFO][4114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.274 [INFO][4114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.438 [INFO][4164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.439 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.439 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.466 [WARNING][4164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.466 [INFO][4164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.483 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:50.552213 containerd[1590]: 2025-02-13 20:17:50.515 [INFO][4114] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:50.552213 containerd[1590]: time="2025-02-13T20:17:50.551134325Z" level=info msg="TearDown network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" successfully" Feb 13 20:17:50.552213 containerd[1590]: time="2025-02-13T20:17:50.551266444Z" level=info msg="StopPodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" returns successfully" Feb 13 20:17:50.552213 containerd[1590]: time="2025-02-13T20:17:50.551156677Z" level=info msg="TearDown network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" successfully" Feb 13 20:17:50.552213 containerd[1590]: time="2025-02-13T20:17:50.551410002Z" level=info msg="StopPodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" returns successfully" Feb 13 20:17:50.566666 containerd[1590]: time="2025-02-13T20:17:50.560401593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s86p,Uid:23f309b9-4a91-4126-8a72-5d65e6b18bef,Namespace:calico-system,Attempt:1,}" Feb 13 20:17:50.572213 kubelet[2780]: E0213 20:17:50.566872 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:50.568799 systemd[1]: run-netns-cni\x2d710dfe8c\x2db88f\x2d1c46\x2dbbbb\x2d101f7bd0f260.mount: Deactivated successfully. Feb 13 20:17:50.569133 systemd[1]: run-netns-cni\x2d27d3e980\x2d8851\x2df591\x2d20a8\x2d22f068583f37.mount: Deactivated successfully. Feb 13 20:17:50.635168 containerd[1590]: time="2025-02-13T20:17:50.625252100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnks,Uid:52ce07cf-126c-4648-b0ce-675124d0c399,Namespace:kube-system,Attempt:1,}" Feb 13 20:17:50.965251 systemd-networkd[1224]: cali9b4c87cf45e: Gained IPv6LL Feb 13 20:17:51.110803 containerd[1590]: time="2025-02-13T20:17:51.110661460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-qkf2k,Uid:314fcfd7-54bc-4098-8fb4-0a3c2b4eec50,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa\"" Feb 13 20:17:51.111628 kernel: bpftool[4246]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:17:51.219324 containerd[1590]: time="2025-02-13T20:17:51.218869234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:17:51.468340 systemd[1]: Started sshd@8-137.184.189.10:22-147.75.109.163:55348.service - OpenSSH per-connection server daemon (147.75.109.163:55348). 
Feb 13 20:17:51.603075 systemd-networkd[1224]: cali3d0c60c7a7b: Link UP Feb 13 20:17:51.606761 systemd-networkd[1224]: cali3d0c60c7a7b: Gained carrier Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:50.805 [INFO][4198] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0 csi-node-driver- calico-system 23f309b9-4a91-4126-8a72-5d65e6b18bef 859 0 2025-02-13 20:17:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-d-1eeb8951e4 csi-node-driver-7s86p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d0c60c7a7b [] []}} ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:50.805 [INFO][4198] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.156 [INFO][4225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" HandleID="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.306 [INFO][4225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" HandleID="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eadb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-d-1eeb8951e4", "pod":"csi-node-driver-7s86p", "timestamp":"2025-02-13 20:17:51.156323488 +0000 UTC"}, Hostname:"ci-4081.3.1-d-1eeb8951e4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.306 [INFO][4225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.306 [INFO][4225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.306 [INFO][4225] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-d-1eeb8951e4' Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.369 [INFO][4225] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.420 [INFO][4225] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.469 [INFO][4225] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.489 [INFO][4225] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.498 [INFO][4225] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.498 [INFO][4225] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.508 [INFO][4225] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2 Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.533 [INFO][4225] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.550 [INFO][4225] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.2/26] block=192.168.18.0/26 handle="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.550 [INFO][4225] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.2/26] handle="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.550 [INFO][4225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:17:51.686380 containerd[1590]: 2025-02-13 20:17:51.550 [INFO][4225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.2/26] IPv6=[] ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" HandleID="k8s-pod-network.b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.688082 containerd[1590]: 2025-02-13 20:17:51.578 [INFO][4198] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f309b9-4a91-4126-8a72-5d65e6b18bef", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"", Pod:"csi-node-driver-7s86p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0c60c7a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:51.688082 containerd[1590]: 2025-02-13 20:17:51.582 [INFO][4198] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.2/32] ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.688082 containerd[1590]: 2025-02-13 20:17:51.583 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d0c60c7a7b ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.688082 containerd[1590]: 2025-02-13 20:17:51.610 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.688082 containerd[1590]: 2025-02-13 20:17:51.612 [INFO][4198] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f309b9-4a91-4126-8a72-5d65e6b18bef", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2", Pod:"csi-node-driver-7s86p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0c60c7a7b", MAC:"8e:79:ea:4d:f6:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:51.688082 containerd[1590]: 2025-02-13 20:17:51.672 [INFO][4198] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2" Namespace="calico-system" Pod="csi-node-driver-7s86p" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:51.720853 containerd[1590]: time="2025-02-13T20:17:51.720540716Z" level=info msg="StopPodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\"" Feb 13 20:17:51.722386 containerd[1590]: time="2025-02-13T20:17:51.721988367Z" level=info msg="StopPodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\"" Feb 13 20:17:51.722562 sshd[4256]: Accepted publickey for core from 147.75.109.163 port 55348 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:51.733307 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:51.773619 systemd-networkd[1224]: cali9b4b27b7a51: Link UP Feb 13 20:17:51.777714 systemd-networkd[1224]: cali9b4b27b7a51: Gained carrier Feb 13 20:17:51.778375 systemd-logind[1558]: New session 8 of user core. Feb 13 20:17:51.787151 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:50.812 [INFO][4200] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0 coredns-7db6d8ff4d- kube-system 52ce07cf-126c-4648-b0ce-675124d0c399 861 0 2025-02-13 20:17:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-d-1eeb8951e4 coredns-7db6d8ff4d-9hnks eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b4b27b7a51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:50.813 [INFO][4200] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.252 [INFO][4229] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" HandleID="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.348 [INFO][4229] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" HandleID="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-d-1eeb8951e4", "pod":"coredns-7db6d8ff4d-9hnks", "timestamp":"2025-02-13 20:17:51.252723637 +0000 UTC"}, Hostname:"ci-4081.3.1-d-1eeb8951e4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.348 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.550 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.551 [INFO][4229] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-d-1eeb8951e4' Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.563 [INFO][4229] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.579 [INFO][4229] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.590 [INFO][4229] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.598 [INFO][4229] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.610 [INFO][4229] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.611 [INFO][4229] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.622 [INFO][4229] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559 Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.644 [INFO][4229] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.672 [INFO][4229] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.3/26] block=192.168.18.0/26 handle="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.674 [INFO][4229] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.3/26] handle="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.676 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
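The [4229] entries above trace one complete IPAM transaction: take the host-wide lock, confirm this node's affinity to block 192.168.18.0/26, claim 192.168.18.3 under a handle derived from the pod sandbox ID, then release the lock. A sketch of the equivalent call through libcalico-go, built from the same arguments the log prints as assignArgs; return types and import paths vary across libcalico-go versions, so treat this as illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors assignArgs from the log: one IPv4, no IPv6, handle tied to the
	// pod sandbox so teardown can later release by handle. (The log's
	// assignArgs also carries IntendedUse:"Workload"; that field's type is
	// version-dependent, so it is omitted here.)
	handle := "k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559"
	args := ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Hostname: "ci-4081.3.1-d-1eeb8951e4",
		Attrs: map[string]string{
			"namespace": "kube-system",
			"node":      "ci-4081.3.1-d-1eeb8951e4",
			"pod":       "coredns-7db6d8ff4d-9hnks",
		},
	}

	// AutoAssign serializes on the host-wide IPAM lock seen in the log and
	// prefers blocks already affine to this host (192.168.18.0/26 here).
	v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("assigned:", v4) // e.g. 192.168.18.3/26
}
```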
Feb 13 20:17:51.955518 containerd[1590]: 2025-02-13 20:17:51.679 [INFO][4229] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.3/26] IPv6=[] ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" HandleID="k8s-pod-network.128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:51.956314 containerd[1590]: 2025-02-13 20:17:51.759 [INFO][4200] cni-plugin/k8s.go 386: Populated endpoint ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"52ce07cf-126c-4648-b0ce-675124d0c399", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"", Pod:"coredns-7db6d8ff4d-9hnks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b4b27b7a51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:51.956314 containerd[1590]: 2025-02-13 20:17:51.768 [INFO][4200] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.3/32] ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:51.956314 containerd[1590]: 2025-02-13 20:17:51.769 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b4b27b7a51 ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:51.956314 containerd[1590]: 2025-02-13 20:17:51.775 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks"
WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:51.956314 containerd[1590]: 2025-02-13 20:17:51.780 [INFO][4200] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"52ce07cf-126c-4648-b0ce-675124d0c399", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559", Pod:"coredns-7db6d8ff4d-9hnks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b4b27b7a51", MAC:"6e:a7:a1:3b:7d:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:51.956314 containerd[1590]: 2025-02-13 20:17:51.814 [INFO][4200] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnks" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:52.342179 sshd[4256]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:52.350658 containerd[1590]: time="2025-02-13T20:17:52.337316720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:52.350658 containerd[1590]: time="2025-02-13T20:17:52.337455551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:52.350658 containerd[1590]: time="2025-02-13T20:17:52.337473733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:52.350658 containerd[1590]: time="2025-02-13T20:17:52.337709611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:52.351014 systemd[1]: sshd@8-137.184.189.10:22-147.75.109.163:55348.service: Deactivated successfully. Feb 13 20:17:52.361289 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:17:52.371537 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:17:52.374919 systemd-logind[1558]: Removed session 8. Feb 13 20:17:52.519253 containerd[1590]: time="2025-02-13T20:17:52.516241694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:52.519253 containerd[1590]: time="2025-02-13T20:17:52.516367247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:52.519253 containerd[1590]: time="2025-02-13T20:17:52.516464640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:52.535870 containerd[1590]: time="2025-02-13T20:17:52.532268381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:52.719866 containerd[1590]: time="2025-02-13T20:17:52.719786268Z" level=info msg="StopPodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\"" Feb 13 20:17:52.812341 systemd-networkd[1224]: cali3d0c60c7a7b: Gained IPv6LL Feb 13 20:17:52.825475 containerd[1590]: time="2025-02-13T20:17:52.824634355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s86p,Uid:23f309b9-4a91-4126-8a72-5d65e6b18bef,Namespace:calico-system,Attempt:1,} returns sandbox id \"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2\"" Feb 13 20:17:52.876469 systemd-networkd[1224]: cali9b4b27b7a51: Gained IPv6LL Feb 13 20:17:52.907109 containerd[1590]: time="2025-02-13T20:17:52.906953603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnks,Uid:52ce07cf-126c-4648-b0ce-675124d0c399,Namespace:kube-system,Attempt:1,} returns sandbox id \"128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559\"" Feb 13 20:17:52.909938 kubelet[2780]: E0213 20:17:52.909571 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:52.932728 containerd[1590]: time="2025-02-13T20:17:52.932126938Z" level=info msg="CreateContainer within sandbox \"128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.600 [INFO][4322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.601 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" iface="eth0" netns="/var/run/netns/cni-4736d483-8f4c-b5af-7ed9-097a9b3bffb8" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.601 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" iface="eth0" netns="/var/run/netns/cni-4736d483-8f4c-b5af-7ed9-097a9b3bffb8" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.602 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" iface="eth0" netns="/var/run/netns/cni-4736d483-8f4c-b5af-7ed9-097a9b3bffb8" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.602 [INFO][4322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.602 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.893 [INFO][4397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.893 [INFO][4397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.893 [INFO][4397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.918 [WARNING][4397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.918 [INFO][4397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.925 [INFO][4397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:52.953355 containerd[1590]: 2025-02-13 20:17:52.942 [INFO][4322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:17:52.961741 containerd[1590]: time="2025-02-13T20:17:52.955167678Z" level=info msg="TearDown network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" successfully" Feb 13 20:17:52.961741 containerd[1590]: time="2025-02-13T20:17:52.955223668Z" level=info msg="StopPodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" returns successfully" Feb 13 20:17:52.970571 systemd[1]: run-netns-cni\x2d4736d483\x2d8f4c\x2db5af\x2d7ed9\x2d097a9b3bffb8.mount: Deactivated successfully. 
Feb 13 20:17:52.978771 containerd[1590]: time="2025-02-13T20:17:52.977656815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d949f5-dfcnc,Uid:59fc9a5d-0832-41bf-8c96-780c4d20ba9b,Namespace:calico-system,Attempt:1,}" Feb 13 20:17:52.985183 containerd[1590]: time="2025-02-13T20:17:52.983319506Z" level=info msg="CreateContainer within sandbox \"128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ec6fde8caa3fb20c6c718f1cba939bcbae4fad9491cdfa7ae820cba58cb5960\"" Feb 13 20:17:52.987856 containerd[1590]: time="2025-02-13T20:17:52.987650386Z" level=info msg="StartContainer for \"5ec6fde8caa3fb20c6c718f1cba939bcbae4fad9491cdfa7ae820cba58cb5960\"" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.707 [INFO][4304] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.708 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" iface="eth0" netns="/var/run/netns/cni-903b2819-5d72-577b-1b9e-a2b2164fd2aa" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.708 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" iface="eth0" netns="/var/run/netns/cni-903b2819-5d72-577b-1b9e-a2b2164fd2aa" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.708 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" iface="eth0" netns="/var/run/netns/cni-903b2819-5d72-577b-1b9e-a2b2164fd2aa" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.709 [INFO][4304] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.709 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.984 [INFO][4410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.984 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:52.984 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:53.018 [WARNING][4410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:53.018 [INFO][4410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:53.021 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:53.051449 containerd[1590]: 2025-02-13 20:17:53.033 [INFO][4304] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:17:53.053265 containerd[1590]: time="2025-02-13T20:17:53.052078388Z" level=info msg="TearDown network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" successfully" Feb 13 20:17:53.054590 containerd[1590]: time="2025-02-13T20:17:53.052129133Z" level=info msg="StopPodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" returns successfully" Feb 13 20:17:53.057599 containerd[1590]: time="2025-02-13T20:17:53.057539566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-nwxvr,Uid:5345eab2-cc0b-40e4-a4c7-074faddca668,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:17:53.210400 systemd-networkd[1224]: vxlan.calico: Link UP Feb 13 20:17:53.210412 systemd-networkd[1224]: vxlan.calico: Gained carrier Feb 13 20:17:53.416783 systemd[1]: run-netns-cni\x2d903b2819\x2d5d72\x2d577b\x2d1b9e\x2da2b2164fd2aa.mount: Deactivated successfully. Feb 13 20:17:53.479705 containerd[1590]: time="2025-02-13T20:17:53.479622273Z" level=info msg="StartContainer for \"5ec6fde8caa3fb20c6c718f1cba939bcbae4fad9491cdfa7ae820cba58cb5960\" returns successfully" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.082 [INFO][4435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.082 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" iface="eth0" netns="/var/run/netns/cni-f87e5a23-8c7a-198c-ba82-806e45af6b86" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.082 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" iface="eth0" netns="/var/run/netns/cni-f87e5a23-8c7a-198c-ba82-806e45af6b86" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.083 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" iface="eth0" netns="/var/run/netns/cni-f87e5a23-8c7a-198c-ba82-806e45af6b86" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.083 [INFO][4435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.083 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.476 [INFO][4484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.485 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.485 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.516 [WARNING][4484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.517 [INFO][4484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.525 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:53.622915 containerd[1590]: 2025-02-13 20:17:53.553 [INFO][4435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:53.622915 containerd[1590]: time="2025-02-13T20:17:53.621537002Z" level=info msg="TearDown network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" successfully" Feb 13 20:17:53.622915 containerd[1590]: time="2025-02-13T20:17:53.621593326Z" level=info msg="StopPodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" returns successfully" Feb 13 20:17:53.637702 systemd[1]: run-netns-cni\x2df87e5a23\x2d8c7a\x2d198c\x2dba82\x2d806e45af6b86.mount: Deactivated successfully. 
Feb 13 20:17:53.646272 kubelet[2780]: E0213 20:17:53.644969 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:53.678193 containerd[1590]: time="2025-02-13T20:17:53.675892822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rrs5,Uid:d1ace4ad-b993-4fad-a1f3-c05836f90411,Namespace:kube-system,Attempt:1,}" Feb 13 20:17:53.696650 kubelet[2780]: E0213 20:17:53.690914 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:53.911313 kubelet[2780]: I0213 20:17:53.911227 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hnks" podStartSLOduration=46.911196154 podStartE2EDuration="46.911196154s" podCreationTimestamp="2025-02-13 20:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:53.909928937 +0000 UTC m=+58.768119546" watchObservedRunningTime="2025-02-13 20:17:53.911196154 +0000 UTC m=+58.769386754" Feb 13 20:17:54.269481 systemd-networkd[1224]: calif8f7dc15941: Link UP Feb 13 20:17:54.276027 systemd-networkd[1224]: calif8f7dc15941: Gained carrier Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:53.551 [INFO][4483] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0 calico-kube-controllers-57c6d949f5- calico-system 59fc9a5d-0832-41bf-8c96-780c4d20ba9b 917 0 2025-02-13 20:17:18 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57c6d949f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-d-1eeb8951e4 calico-kube-controllers-57c6d949f5-dfcnc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif8f7dc15941 [] []}} ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:53.560 [INFO][4483] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.050 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" HandleID="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.074 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc"
HandleID="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-d-1eeb8951e4", "pod":"calico-kube-controllers-57c6d949f5-dfcnc", "timestamp":"2025-02-13 20:17:54.050609077 +0000 UTC"}, Hostname:"ci-4081.3.1-d-1eeb8951e4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.075 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.075 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.076 [INFO][4560] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-d-1eeb8951e4' Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.085 [INFO][4560] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.100 [INFO][4560] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.130 [INFO][4560] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.152 [INFO][4560] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.174 [INFO][4560] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.175 [INFO][4560] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.180 [INFO][4560] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.193 [INFO][4560] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.226 [INFO][4560] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.4/26] block=192.168.18.0/26 handle="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.226 [INFO][4560] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.4/26] handle="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.226 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:17:54.397878 containerd[1590]: 2025-02-13 20:17:54.226 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.4/26] IPv6=[] ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" HandleID="k8s-pod-network.ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.402078 containerd[1590]: 2025-02-13 20:17:54.246 [INFO][4483] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0", GenerateName:"calico-kube-controllers-57c6d949f5-", Namespace:"calico-system", SelfLink:"", UID:"59fc9a5d-0832-41bf-8c96-780c4d20ba9b", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d949f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"", Pod:"calico-kube-controllers-57c6d949f5-dfcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8f7dc15941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:54.402078 containerd[1590]: 2025-02-13 20:17:54.247 [INFO][4483] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.4/32] ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.402078 containerd[1590]: 2025-02-13 20:17:54.248 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8f7dc15941 ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.402078 containerd[1590]: 2025-02-13 20:17:54.296 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.402078
containerd[1590]: 2025-02-13 20:17:54.297 [INFO][4483] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0", GenerateName:"calico-kube-controllers-57c6d949f5-", Namespace:"calico-system", SelfLink:"", UID:"59fc9a5d-0832-41bf-8c96-780c4d20ba9b", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d949f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc", Pod:"calico-kube-controllers-57c6d949f5-dfcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8f7dc15941", MAC:"fa:65:21:23:6c:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:54.402078 containerd[1590]: 2025-02-13 20:17:54.355 [INFO][4483] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc" Namespace="calico-system" Pod="calico-kube-controllers-57c6d949f5-dfcnc" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:17:54.557304 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:17:54.536176 systemd-networkd[1224]: cali69375d9ca9f: Link UP Feb 13 20:17:54.548746 systemd-networkd[1224]: cali69375d9ca9f: Gained carrier Feb 13 20:17:54.559738 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:17:54.559794 systemd-resolved[1484]: Flushed all caches.
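The cali* interfaces gaining carrier above are the host ends of per-pod veth pairs; vxlan.calico (which gains IPv6LL just below) is the node's VXLAN tunnel device for pod traffic between nodes. A short netlink sketch for inspecting it on a node; Calico's usual defaults are VNI 4096 on UDP port 4789, but both come from pool/Felix configuration, so the printed values may differ:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("vxlan.calico")
	if err != nil {
		log.Fatal(err)
	}
	vx, ok := link.(*netlink.Vxlan)
	if !ok {
		log.Fatalf("%s is %s, not vxlan", link.Attrs().Name, link.Type())
	}
	// VxlanId is the VNI; Port is the UDP destination port.
	fmt.Printf("vni=%d port=%d mtu=%d\n", vx.VxlanId, vx.Port, vx.Attrs().MTU)
}
```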
Feb 13 20:17:54.605062 systemd-networkd[1224]: vxlan.calico: Gained IPv6LL Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:53.806 [INFO][4501] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0 calico-apiserver-747d6676bd- calico-apiserver 5345eab2-cc0b-40e4-a4c7-074faddca668 919 0 2025-02-13 20:17:16 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747d6676bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-d-1eeb8951e4 calico-apiserver-747d6676bd-nwxvr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali69375d9ca9f [] []}} ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:53.806 [INFO][4501] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.080 [INFO][4571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" HandleID="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.108 [INFO][4571] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" HandleID="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8de0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-d-1eeb8951e4", "pod":"calico-apiserver-747d6676bd-nwxvr", "timestamp":"2025-02-13 20:17:54.080489877 +0000 UTC"}, Hostname:"ci-4081.3.1-d-1eeb8951e4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.109 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.228 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.229 [INFO][4571] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-d-1eeb8951e4' Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.239 [INFO][4571] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.251 [INFO][4571] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.288 [INFO][4571] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.346 [INFO][4571] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.378 [INFO][4571] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.379 [INFO][4571] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.400 [INFO][4571] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453 Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.425 [INFO][4571] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.478 [INFO][4571] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.5/26] block=192.168.18.0/26 handle="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.479 [INFO][4571] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.5/26] handle="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.480 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:17:54.650172 containerd[1590]: 2025-02-13 20:17:54.481 [INFO][4571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.5/26] IPv6=[] ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" HandleID="k8s-pod-network.f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.659317 containerd[1590]: 2025-02-13 20:17:54.511 [INFO][4501] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5345eab2-cc0b-40e4-a4c7-074faddca668", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"", Pod:"calico-apiserver-747d6676bd-nwxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69375d9ca9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:54.659317 containerd[1590]: 2025-02-13 20:17:54.511 [INFO][4501] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.5/32] ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.659317 containerd[1590]: 2025-02-13 20:17:54.511 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69375d9ca9f ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.659317 containerd[1590]: 2025-02-13 20:17:54.556 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.659317 containerd[1590]: 2025-02-13 20:17:54.563 [INFO][4501] cni-plugin/k8s.go 414: Added Mac,
interface name, and active container ID to endpoint ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5345eab2-cc0b-40e4-a4c7-074faddca668", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453", Pod:"calico-apiserver-747d6676bd-nwxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69375d9ca9f", MAC:"1a:12:43:dc:10:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:54.659317 containerd[1590]: 2025-02-13 20:17:54.600 [INFO][4501] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453" Namespace="calico-apiserver" Pod="calico-apiserver-747d6676bd-nwxvr" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:17:54.786563 kubelet[2780]: E0213 20:17:54.783583 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:54.891379 systemd-networkd[1224]: cali881f9c65399: Link UP Feb 13 20:17:54.902016 containerd[1590]: time="2025-02-13T20:17:54.878113557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:54.891703 systemd-networkd[1224]: cali881f9c65399: Gained carrier Feb 13 20:17:54.927675 containerd[1590]: time="2025-02-13T20:17:54.925026521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:54.927675 containerd[1590]: time="2025-02-13T20:17:54.925105683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:54.927675 containerd[1590]: time="2025-02-13T20:17:54.925368165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.167 [INFO][4558] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0 coredns-7db6d8ff4d- kube-system d1ace4ad-b993-4fad-a1f3-c05836f90411 925 0 2025-02-13 20:17:07 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-d-1eeb8951e4 coredns-7db6d8ff4d-8rrs5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali881f9c65399 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.167 [INFO][4558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.459 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" HandleID="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.582 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" HandleID="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-d-1eeb8951e4", "pod":"coredns-7db6d8ff4d-8rrs5", "timestamp":"2025-02-13 20:17:54.450926742 +0000 UTC"}, Hostname:"ci-4081.3.1-d-1eeb8951e4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.597 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.599 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.601 [INFO][4589] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-d-1eeb8951e4' Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.612 [INFO][4589] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.630 [INFO][4589] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.658 [INFO][4589] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.678 [INFO][4589] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.724 [INFO][4589] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.725 [INFO][4589] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.731 [INFO][4589] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4 Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.749 [INFO][4589] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.782 [INFO][4589] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.6/26] block=192.168.18.0/26 handle="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.782 [INFO][4589] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.6/26] handle="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" host="ci-4081.3.1-d-1eeb8951e4" Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.782 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:17:55.032094 containerd[1590]: 2025-02-13 20:17:54.782 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.6/26] IPv6=[] ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" HandleID="k8s-pod-network.ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.038097 containerd[1590]: 2025-02-13 20:17:54.884 [INFO][4558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d1ace4ad-b993-4fad-a1f3-c05836f90411", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"", Pod:"coredns-7db6d8ff4d-8rrs5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali881f9c65399", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:55.038097 containerd[1590]: 2025-02-13 20:17:54.885 [INFO][4558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.6/32] ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.038097 containerd[1590]: 2025-02-13 20:17:54.885 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali881f9c65399 ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.038097 containerd[1590]: 2025-02-13 20:17:54.902 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" 
WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.038097 containerd[1590]: 2025-02-13 20:17:54.905 [INFO][4558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d1ace4ad-b993-4fad-a1f3-c05836f90411", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4", Pod:"coredns-7db6d8ff4d-8rrs5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali881f9c65399", MAC:"2e:cf:e9:10:22:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:55.038097 containerd[1590]: 2025-02-13 20:17:54.969 [INFO][4558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rrs5" WorkloadEndpoint="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:55.099302 containerd[1590]: time="2025-02-13T20:17:55.093300810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:55.099302 containerd[1590]: time="2025-02-13T20:17:55.094463658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:55.099302 containerd[1590]: time="2025-02-13T20:17:55.094492061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:55.099302 containerd[1590]: time="2025-02-13T20:17:55.095288747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:55.191448 systemd[1]: run-containerd-runc-k8s.io-f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453-runc.f0oNQa.mount: Deactivated successfully. Feb 13 20:17:55.304491 containerd[1590]: time="2025-02-13T20:17:55.304281597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:55.307271 containerd[1590]: time="2025-02-13T20:17:55.306907005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:55.308151 containerd[1590]: time="2025-02-13T20:17:55.307791417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:55.309136 containerd[1590]: time="2025-02-13T20:17:55.309039819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:55.335999 containerd[1590]: time="2025-02-13T20:17:55.335423487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d949f5-dfcnc,Uid:59fc9a5d-0832-41bf-8c96-780c4d20ba9b,Namespace:calico-system,Attempt:1,} returns sandbox id \"ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc\"" Feb 13 20:17:55.366461 containerd[1590]: time="2025-02-13T20:17:55.366282179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6676bd-nwxvr,Uid:5345eab2-cc0b-40e4-a4c7-074faddca668,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453\"" Feb 13 20:17:55.565167 systemd-networkd[1224]: calif8f7dc15941: Gained IPv6LL Feb 13 20:17:55.678818 containerd[1590]: time="2025-02-13T20:17:55.678595474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rrs5,Uid:d1ace4ad-b993-4fad-a1f3-c05836f90411,Namespace:kube-system,Attempt:1,} returns sandbox id \"ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4\"" Feb 13 20:17:55.751779 kubelet[2780]: E0213 20:17:55.750725 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:55.816633 containerd[1590]: time="2025-02-13T20:17:55.815029136Z" level=info msg="CreateContainer within sandbox \"ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:17:55.834384 containerd[1590]: time="2025-02-13T20:17:55.833777006Z" level=info msg="StopPodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\"" Feb 13 20:17:55.889881 containerd[1590]: time="2025-02-13T20:17:55.889402639Z" level=info msg="CreateContainer within sandbox \"ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f8bea6409b31d7083fb513712a721f725f85bb5f4a3f8f4ffdeb09c163334ce\"" Feb 13 20:17:55.895872 containerd[1590]: time="2025-02-13T20:17:55.895385234Z" level=info msg="StartContainer for \"2f8bea6409b31d7083fb513712a721f725f85bb5f4a3f8f4ffdeb09c163334ce\"" Feb 13 20:17:55.923776 kubelet[2780]: E0213 20:17:55.921205 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:56.012297 systemd-networkd[1224]: cali69375d9ca9f: Gained IPv6LL Feb 13 20:17:56.272287 systemd-networkd[1224]: cali881f9c65399: Gained IPv6LL Feb 13 20:17:56.362404 containerd[1590]: time="2025-02-13T20:17:56.362303094Z" level=info msg="StartContainer for \"2f8bea6409b31d7083fb513712a721f725f85bb5f4a3f8f4ffdeb09c163334ce\" returns successfully" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.332 [WARNING][4806] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"52ce07cf-126c-4648-b0ce-675124d0c399", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559", Pod:"coredns-7db6d8ff4d-9hnks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b4b27b7a51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.332 [INFO][4806] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.332 [INFO][4806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" iface="eth0" netns="" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.333 [INFO][4806] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.333 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.477 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.478 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.478 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.491 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.492 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.500 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:56.532744 containerd[1590]: 2025-02-13 20:17:56.508 [INFO][4806] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:56.532744 containerd[1590]: time="2025-02-13T20:17:56.530187037Z" level=info msg="TearDown network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" successfully" Feb 13 20:17:56.532744 containerd[1590]: time="2025-02-13T20:17:56.530226142Z" level=info msg="StopPodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" returns successfully" Feb 13 20:17:56.553148 containerd[1590]: time="2025-02-13T20:17:56.553070418Z" level=info msg="RemovePodSandbox for \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\"" Feb 13 20:17:56.559093 containerd[1590]: time="2025-02-13T20:17:56.558971477Z" level=info msg="Forcibly stopping sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\"" Feb 13 20:17:56.931558 kubelet[2780]: E0213 20:17:56.931508 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.763 [WARNING][4866] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"52ce07cf-126c-4648-b0ce-675124d0c399", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"128c847565db70a580974aa2a15c2f76bc0a30b5c6c09ca3c5f761f101c17559", Pod:"coredns-7db6d8ff4d-9hnks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b4b27b7a51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.775 [INFO][4866] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.775 [INFO][4866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" iface="eth0" netns="" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.775 [INFO][4866] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.775 [INFO][4866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.970 [INFO][4872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.970 [INFO][4872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.970 [INFO][4872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.993 [WARNING][4872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:56.994 [INFO][4872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" HandleID="k8s-pod-network.a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--9hnks-eth0" Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:57.004 [INFO][4872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:57.066616 containerd[1590]: 2025-02-13 20:17:57.026 [INFO][4866] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980" Feb 13 20:17:57.074672 containerd[1590]: time="2025-02-13T20:17:57.070446086Z" level=info msg="TearDown network for sandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" successfully" Feb 13 20:17:57.133259 containerd[1590]: time="2025-02-13T20:17:57.129139955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:17:57.133259 containerd[1590]: time="2025-02-13T20:17:57.130016829Z" level=info msg="RemovePodSandbox \"a406673dd7deede6e3bba7f6936e8566e27b45a71df3defd63e6e66981ffd980\" returns successfully" Feb 13 20:17:57.152539 containerd[1590]: time="2025-02-13T20:17:57.151459397Z" level=info msg="StopPodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\"" Feb 13 20:17:57.154624 kubelet[2780]: I0213 20:17:57.154510 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8rrs5" podStartSLOduration=50.154469573 podStartE2EDuration="50.154469573s" podCreationTimestamp="2025-02-13 20:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:56.991876942 +0000 UTC m=+61.850067552" watchObservedRunningTime="2025-02-13 20:17:57.154469573 +0000 UTC m=+62.012660186" Feb 13 20:17:57.365875 kubelet[2780]: E0213 20:17:57.363228 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:57.368073 systemd[1]: Started sshd@9-137.184.189.10:22-147.75.109.163:55358.service - OpenSSH per-connection server daemon (147.75.109.163:55358). Feb 13 20:17:57.746103 systemd[1]: run-containerd-runc-k8s.io-b7a6af2ae75bad4a4bbd6949c3ba28186901820ec01ad3b8ab89e2e41e050cd9-runc.L73aQw.mount: Deactivated successfully. Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.299 [WARNING][4895] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d1ace4ad-b993-4fad-a1f3-c05836f90411", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4", Pod:"coredns-7db6d8ff4d-8rrs5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali881f9c65399", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.299 [INFO][4895] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.299 [INFO][4895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" iface="eth0" netns="" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.300 [INFO][4895] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.300 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.591 [INFO][4901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.591 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.591 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.668 [WARNING][4901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.669 [INFO][4901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.703 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:57.772214 containerd[1590]: 2025-02-13 20:17:57.763 [INFO][4895] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:57.785250 containerd[1590]: time="2025-02-13T20:17:57.773379383Z" level=info msg="TearDown network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" successfully" Feb 13 20:17:57.785250 containerd[1590]: time="2025-02-13T20:17:57.773436743Z" level=info msg="StopPodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" returns successfully" Feb 13 20:17:57.794868 containerd[1590]: time="2025-02-13T20:17:57.788535157Z" level=info msg="RemovePodSandbox for \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\"" Feb 13 20:17:57.794868 containerd[1590]: time="2025-02-13T20:17:57.788621004Z" level=info msg="Forcibly stopping sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\"" Feb 13 20:17:57.837863 sshd[4905]: Accepted publickey for core from 147.75.109.163 port 55358 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:57.859400 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:57.896823 systemd-logind[1558]: New session 9 of user core. Feb 13 20:17:57.905142 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:17:57.972457 kubelet[2780]: E0213 20:17:57.971965 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:58.000626 kubelet[2780]: E0213 20:17:57.994461 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.338 [WARNING][4942] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d1ace4ad-b993-4fad-a1f3-c05836f90411", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"ef40c3150a87a27adf5d42027b0064ef1a56c5670699b91d8470f92e16669cc4", Pod:"coredns-7db6d8ff4d-8rrs5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali881f9c65399", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.340 [INFO][4942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.340 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" iface="eth0" netns="" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.340 [INFO][4942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.340 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.519 [INFO][4980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.519 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.519 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.541 [WARNING][4980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.541 [INFO][4980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" HandleID="k8s-pod-network.43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-coredns--7db6d8ff4d--8rrs5-eth0" Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.550 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:58.584941 containerd[1590]: 2025-02-13 20:17:58.570 [INFO][4942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a" Feb 13 20:17:58.590359 containerd[1590]: time="2025-02-13T20:17:58.584963265Z" level=info msg="TearDown network for sandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" successfully" Feb 13 20:17:58.606866 containerd[1590]: time="2025-02-13T20:17:58.605577162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:17:58.606866 containerd[1590]: time="2025-02-13T20:17:58.605704278Z" level=info msg="RemovePodSandbox \"43d27e564c05cd4f2ba961721abc5bea5bd71a48c0a9b00469ec2691dc1a566a\" returns successfully" Feb 13 20:17:58.607221 containerd[1590]: time="2025-02-13T20:17:58.606912207Z" level=info msg="StopPodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\"" Feb 13 20:17:58.642341 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:17:58.635384 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:17:58.635396 systemd-resolved[1484]: Flushed all caches. Feb 13 20:17:58.766186 sshd[4905]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:58.783597 systemd[1]: sshd@9-137.184.189.10:22-147.75.109.163:55358.service: Deactivated successfully. Feb 13 20:17:58.802599 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:17:58.811956 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:17:58.815650 systemd-logind[1558]: Removed session 9. Feb 13 20:17:59.003024 kubelet[2780]: E0213 20:17:58.996608 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.834 [WARNING][5001] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f309b9-4a91-4126-8a72-5d65e6b18bef", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2", Pod:"csi-node-driver-7s86p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0c60c7a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.835 [INFO][5001] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.835 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" iface="eth0" netns="" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.835 [INFO][5001] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.835 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.927 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.928 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.928 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.972 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.972 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:58.995 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:59.033433 containerd[1590]: 2025-02-13 20:17:59.005 [INFO][5001] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.036577 containerd[1590]: time="2025-02-13T20:17:59.035755827Z" level=info msg="TearDown network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" successfully" Feb 13 20:17:59.036577 containerd[1590]: time="2025-02-13T20:17:59.035820034Z" level=info msg="StopPodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" returns successfully" Feb 13 20:17:59.038935 containerd[1590]: time="2025-02-13T20:17:59.038322666Z" level=info msg="RemovePodSandbox for \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\"" Feb 13 20:17:59.038935 containerd[1590]: time="2025-02-13T20:17:59.038396508Z" level=info msg="Forcibly stopping sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\"" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.194 [WARNING][5029] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f309b9-4a91-4126-8a72-5d65e6b18bef", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2", Pod:"csi-node-driver-7s86p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0c60c7a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.194 [INFO][5029] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.194 [INFO][5029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" iface="eth0" netns="" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.194 [INFO][5029] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.194 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.266 [INFO][5035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.268 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.269 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.306 [WARNING][5035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.307 [INFO][5035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" HandleID="k8s-pod-network.696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-csi--node--driver--7s86p-eth0" Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.321 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:59.329052 containerd[1590]: 2025-02-13 20:17:59.324 [INFO][5029] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394" Feb 13 20:17:59.330163 containerd[1590]: time="2025-02-13T20:17:59.330025520Z" level=info msg="TearDown network for sandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" successfully" Feb 13 20:17:59.338587 containerd[1590]: time="2025-02-13T20:17:59.338179659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:17:59.338587 containerd[1590]: time="2025-02-13T20:17:59.338310203Z" level=info msg="RemovePodSandbox \"696641b91bd92aad4e49b5d11c2f5a9d2099e42488a8b0faa58c92f98c62b394\" returns successfully" Feb 13 20:17:59.339570 containerd[1590]: time="2025-02-13T20:17:59.339531231Z" level=info msg="StopPodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\"" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.475 [WARNING][5053] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa", Pod:"calico-apiserver-747d6676bd-qkf2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4c87cf45e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.475 [INFO][5053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.475 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" iface="eth0" netns="" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.475 [INFO][5053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.475 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.564 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.564 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.564 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.596 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.597 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.602 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:59.613540 containerd[1590]: 2025-02-13 20:17:59.609 [INFO][5053] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:17:59.616260 containerd[1590]: time="2025-02-13T20:17:59.614336936Z" level=info msg="TearDown network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" successfully" Feb 13 20:17:59.616260 containerd[1590]: time="2025-02-13T20:17:59.614383784Z" level=info msg="StopPodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" returns successfully" Feb 13 20:17:59.618069 containerd[1590]: time="2025-02-13T20:17:59.617534750Z" level=info msg="RemovePodSandbox for \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\"" Feb 13 20:17:59.618502 containerd[1590]: time="2025-02-13T20:17:59.618175172Z" level=info msg="Forcibly stopping sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\"" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.812 [WARNING][5077] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"314fcfd7-54bc-4098-8fb4-0a3c2b4eec50", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa", Pod:"calico-apiserver-747d6676bd-qkf2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4c87cf45e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.814 [INFO][5077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.814 [INFO][5077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" iface="eth0" netns="" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.814 [INFO][5077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.814 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.976 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.976 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.976 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.995 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:17:59.996 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" HandleID="k8s-pod-network.02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--qkf2k-eth0" Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:18:00.005 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:18:00.031890 containerd[1590]: 2025-02-13 20:18:00.015 [INFO][5077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf" Feb 13 20:18:00.031890 containerd[1590]: time="2025-02-13T20:18:00.025892110Z" level=info msg="TearDown network for sandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" successfully" Feb 13 20:18:00.046600 containerd[1590]: time="2025-02-13T20:18:00.043008938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:18:00.046600 containerd[1590]: time="2025-02-13T20:18:00.043135853Z" level=info msg="RemovePodSandbox \"02b6681383cbe001a0ad0afe8ed2525d903cf1c818eef4e66d824c3eed973acf\" returns successfully" Feb 13 20:18:00.048005 containerd[1590]: time="2025-02-13T20:18:00.047487470Z" level=info msg="StopPodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\"" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.271 [WARNING][5110] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0", GenerateName:"calico-kube-controllers-57c6d949f5-", Namespace:"calico-system", SelfLink:"", UID:"59fc9a5d-0832-41bf-8c96-780c4d20ba9b", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d949f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc", Pod:"calico-kube-controllers-57c6d949f5-dfcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8f7dc15941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.271 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.271 [INFO][5110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" iface="eth0" netns="" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.271 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.271 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.387 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.388 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.388 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.404 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.404 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.413 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:18:00.423499 containerd[1590]: 2025-02-13 20:18:00.419 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.425556 containerd[1590]: time="2025-02-13T20:18:00.424185365Z" level=info msg="TearDown network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" successfully" Feb 13 20:18:00.425556 containerd[1590]: time="2025-02-13T20:18:00.424422363Z" level=info msg="StopPodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" returns successfully" Feb 13 20:18:00.434233 containerd[1590]: time="2025-02-13T20:18:00.433612477Z" level=info msg="RemovePodSandbox for \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\"" Feb 13 20:18:00.435848 containerd[1590]: time="2025-02-13T20:18:00.435661232Z" level=info msg="Forcibly stopping sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\"" Feb 13 20:18:00.689984 containerd[1590]: time="2025-02-13T20:18:00.688658684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:00.694877 containerd[1590]: time="2025-02-13T20:18:00.694188771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:18:00.696327 containerd[1590]: time="2025-02-13T20:18:00.696261390Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:00.707451 containerd[1590]: time="2025-02-13T20:18:00.707201765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:00.710939 containerd[1590]: time="2025-02-13T20:18:00.710630999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 9.491635822s" Feb 13 20:18:00.710939 containerd[1590]: time="2025-02-13T20:18:00.710711733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:18:00.718456 containerd[1590]: time="2025-02-13T20:18:00.713875885Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:18:00.726288 containerd[1590]: time="2025-02-13T20:18:00.726119279Z" level=info msg="CreateContainer within sandbox \"56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:18:00.768368 containerd[1590]: time="2025-02-13T20:18:00.768253726Z" level=info msg="CreateContainer within sandbox \"56b0e229ac63e856dc1c524e3d718ed651d0e20a628aa86a76767ed0e57fbfaa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1d9e63cca5460b59a526f2b37b640b4c4e508806e105cf316a937b042c28bb04\"" Feb 13 20:18:00.776068 containerd[1590]: time="2025-02-13T20:18:00.775296920Z" level=info msg="StartContainer for \"1d9e63cca5460b59a526f2b37b640b4c4e508806e105cf316a937b042c28bb04\"" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.673 [WARNING][5134] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0", GenerateName:"calico-kube-controllers-57c6d949f5-", Namespace:"calico-system", SelfLink:"", UID:"59fc9a5d-0832-41bf-8c96-780c4d20ba9b", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d949f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc", Pod:"calico-kube-controllers-57c6d949f5-dfcnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif8f7dc15941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.674 [INFO][5134] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.674 [INFO][5134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" iface="eth0" netns="" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.674 [INFO][5134] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.674 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.776 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.777 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.777 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.796 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.796 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" HandleID="k8s-pod-network.6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--kube--controllers--57c6d949f5--dfcnc-eth0" Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.819 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:18:00.831377 containerd[1590]: 2025-02-13 20:18:00.824 [INFO][5134] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757" Feb 13 20:18:00.831377 containerd[1590]: time="2025-02-13T20:18:00.830212122Z" level=info msg="TearDown network for sandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" successfully" Feb 13 20:18:00.840006 containerd[1590]: time="2025-02-13T20:18:00.838467394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:18:00.840006 containerd[1590]: time="2025-02-13T20:18:00.838632143Z" level=info msg="RemovePodSandbox \"6a00ba3b52ebbae9b7c5640a04b4082751f0f9430329578e23dca67028553757\" returns successfully" Feb 13 20:18:00.841380 containerd[1590]: time="2025-02-13T20:18:00.841297785Z" level=info msg="StopPodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\"" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.032 [WARNING][5172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5345eab2-cc0b-40e4-a4c7-074faddca668", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453", Pod:"calico-apiserver-747d6676bd-nwxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69375d9ca9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.035 [INFO][5172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.036 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" iface="eth0" netns="" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.036 [INFO][5172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.037 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.196 [INFO][5195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.196 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.197 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.233 [WARNING][5195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.238 [INFO][5195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.250 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:18:01.278243 containerd[1590]: 2025-02-13 20:18:01.260 [INFO][5172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.286452 containerd[1590]: time="2025-02-13T20:18:01.278194179Z" level=info msg="TearDown network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" successfully" Feb 13 20:18:01.286452 containerd[1590]: time="2025-02-13T20:18:01.285758099Z" level=info msg="StopPodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" returns successfully" Feb 13 20:18:01.298308 containerd[1590]: time="2025-02-13T20:18:01.297993164Z" level=info msg="RemovePodSandbox for \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\"" Feb 13 20:18:01.298308 containerd[1590]: time="2025-02-13T20:18:01.298058445Z" level=info msg="Forcibly stopping sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\"" Feb 13 20:18:01.309298 containerd[1590]: time="2025-02-13T20:18:01.309136521Z" level=info msg="StartContainer for \"1d9e63cca5460b59a526f2b37b640b4c4e508806e105cf316a937b042c28bb04\" returns successfully" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.548 [WARNING][5223] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0", GenerateName:"calico-apiserver-747d6676bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5345eab2-cc0b-40e4-a4c7-074faddca668", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6676bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-d-1eeb8951e4", ContainerID:"f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453", Pod:"calico-apiserver-747d6676bd-nwxvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69375d9ca9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.549 [INFO][5223] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.550 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" iface="eth0" netns="" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.550 [INFO][5223] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.550 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.620 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.635 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.635 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.670 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.671 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" HandleID="k8s-pod-network.16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Workload="ci--4081.3.1--d--1eeb8951e4-k8s-calico--apiserver--747d6676bd--nwxvr-eth0" Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.684 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:18:01.745721 containerd[1590]: 2025-02-13 20:18:01.722 [INFO][5223] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810" Feb 13 20:18:01.745721 containerd[1590]: time="2025-02-13T20:18:01.744872369Z" level=info msg="TearDown network for sandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" successfully" Feb 13 20:18:01.827574 containerd[1590]: time="2025-02-13T20:18:01.827249966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:18:01.827574 containerd[1590]: time="2025-02-13T20:18:01.827444132Z" level=info msg="RemovePodSandbox \"16beeb4fdfbdb31812687ecfb04bd34081a1f0d0689c3de0d144c7a9cd0c2810\" returns successfully" Feb 13 20:18:03.110946 kubelet[2780]: I0213 20:18:03.110653 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:18:03.238304 containerd[1590]: time="2025-02-13T20:18:03.236038761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:03.246323 containerd[1590]: time="2025-02-13T20:18:03.241487127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:18:03.246323 containerd[1590]: time="2025-02-13T20:18:03.245659915Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:03.272082 containerd[1590]: time="2025-02-13T20:18:03.271961088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:03.274884 containerd[1590]: time="2025-02-13T20:18:03.273989898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.559717455s" Feb 13 20:18:03.274884 containerd[1590]: time="2025-02-13T20:18:03.274066293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:18:03.279209 containerd[1590]: 
time="2025-02-13T20:18:03.278088065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:18:03.306375 containerd[1590]: time="2025-02-13T20:18:03.305541534Z" level=info msg="CreateContainer within sandbox \"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:18:03.415027 containerd[1590]: time="2025-02-13T20:18:03.413017888Z" level=info msg="CreateContainer within sandbox \"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1cf313dc9f70649832a625bcf8284db3581fb2f00891462025da6af6acc01530\"" Feb 13 20:18:03.415255 containerd[1590]: time="2025-02-13T20:18:03.415141088Z" level=info msg="StartContainer for \"1cf313dc9f70649832a625bcf8284db3581fb2f00891462025da6af6acc01530\"" Feb 13 20:18:03.659739 containerd[1590]: time="2025-02-13T20:18:03.659477854Z" level=info msg="StartContainer for \"1cf313dc9f70649832a625bcf8284db3581fb2f00891462025da6af6acc01530\" returns successfully" Feb 13 20:18:03.791741 systemd[1]: Started sshd@10-137.184.189.10:22-147.75.109.163:59632.service - OpenSSH per-connection server daemon (147.75.109.163:59632). Feb 13 20:18:04.175517 sshd[5280]: Accepted publickey for core from 147.75.109.163 port 59632 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:04.189803 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:04.240710 systemd-logind[1558]: New session 10 of user core. Feb 13 20:18:04.249852 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:18:04.528331 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:18:04.523428 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:18:04.523496 systemd-resolved[1484]: Flushed all caches. Feb 13 20:18:04.954333 sshd[5280]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:04.981628 systemd[1]: Started sshd@11-137.184.189.10:22-147.75.109.163:59638.service - OpenSSH per-connection server daemon (147.75.109.163:59638). Feb 13 20:18:04.993468 systemd[1]: sshd@10-137.184.189.10:22-147.75.109.163:59632.service: Deactivated successfully. Feb 13 20:18:05.014999 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:18:05.025846 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:18:05.037653 systemd-logind[1558]: Removed session 10. Feb 13 20:18:05.124248 sshd[5298]: Accepted publickey for core from 147.75.109.163 port 59638 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:05.141020 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:05.156012 systemd-logind[1558]: New session 11 of user core. Feb 13 20:18:05.164566 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:18:06.275177 sshd[5298]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:06.298485 systemd[1]: Started sshd@12-137.184.189.10:22-147.75.109.163:59652.service - OpenSSH per-connection server daemon (147.75.109.163:59652). Feb 13 20:18:06.362435 systemd[1]: sshd@11-137.184.189.10:22-147.75.109.163:59638.service: Deactivated successfully. Feb 13 20:18:06.377899 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:18:06.391962 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. 
Feb 13 20:18:06.412857 systemd-logind[1558]: Removed session 11. Feb 13 20:18:06.569090 sshd[5316]: Accepted publickey for core from 147.75.109.163 port 59652 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:06.577439 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:18:06.576850 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:18:06.576916 systemd-resolved[1484]: Flushed all caches. Feb 13 20:18:06.577712 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:06.611655 systemd-logind[1558]: New session 12 of user core. Feb 13 20:18:06.617357 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:18:06.705018 kubelet[2780]: E0213 20:18:06.701391 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:18:07.318898 sshd[5316]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:07.329692 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:18:07.330228 systemd[1]: sshd@12-137.184.189.10:22-147.75.109.163:59652.service: Deactivated successfully. Feb 13 20:18:07.334251 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:18:07.342573 systemd-logind[1558]: Removed session 12. Feb 13 20:18:08.295353 containerd[1590]: time="2025-02-13T20:18:08.295120080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:08.298402 containerd[1590]: time="2025-02-13T20:18:08.298316594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:18:08.299112 containerd[1590]: time="2025-02-13T20:18:08.299053658Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:08.304348 containerd[1590]: time="2025-02-13T20:18:08.303799076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:08.307113 containerd[1590]: time="2025-02-13T20:18:08.306950142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 5.028611277s" Feb 13 20:18:08.307113 containerd[1590]: time="2025-02-13T20:18:08.307022405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:18:08.317818 containerd[1590]: time="2025-02-13T20:18:08.313013312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:18:08.381626 containerd[1590]: time="2025-02-13T20:18:08.381198799Z" level=info msg="CreateContainer within sandbox \"ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:18:08.434572 containerd[1590]: time="2025-02-13T20:18:08.434474669Z" level=info msg="CreateContainer within sandbox \"ee8ac6db9977067faaffb0f080f39489f41abd2017ed6a015067c47f98ac44cc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7e858b9e156fa89c0a6b7123915f5ea04d5915d563782d1731e7e1156110afd0\"" Feb 13 20:18:08.440613 containerd[1590]: time="2025-02-13T20:18:08.437441321Z" level=info msg="StartContainer for \"7e858b9e156fa89c0a6b7123915f5ea04d5915d563782d1731e7e1156110afd0\"" Feb 13 20:18:08.609260 containerd[1590]: time="2025-02-13T20:18:08.609142919Z" level=info msg="StartContainer for \"7e858b9e156fa89c0a6b7123915f5ea04d5915d563782d1731e7e1156110afd0\" returns successfully" Feb 13 20:18:08.624980 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:18:08.623333 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:18:08.623375 systemd-resolved[1484]: Flushed all caches. Feb 13 20:18:08.778986 containerd[1590]: time="2025-02-13T20:18:08.778444446Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:08.780355 containerd[1590]: time="2025-02-13T20:18:08.779226307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:18:08.786388 containerd[1590]: time="2025-02-13T20:18:08.786309998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 470.438937ms" Feb 13 20:18:08.786388 containerd[1590]: time="2025-02-13T20:18:08.786386621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:18:08.795639 containerd[1590]: time="2025-02-13T20:18:08.795466182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:18:08.809710 containerd[1590]: time="2025-02-13T20:18:08.807795180Z" level=info msg="CreateContainer within sandbox \"f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:18:08.839170 containerd[1590]: time="2025-02-13T20:18:08.838818112Z" level=info msg="CreateContainer within sandbox \"f782e775af1854d34729a5be51f3ca246aeb1817b19b25cd4c3626ded136d453\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2287a19a707e5e5ae506a3464a20914014627602f5c1814ea6760fbc7950c317\"" Feb 13 20:18:08.844954 containerd[1590]: time="2025-02-13T20:18:08.841937452Z" level=info msg="StartContainer for \"2287a19a707e5e5ae506a3464a20914014627602f5c1814ea6760fbc7950c317\"" Feb 13 20:18:09.010070 containerd[1590]: time="2025-02-13T20:18:09.008760991Z" level=info msg="StartContainer for \"2287a19a707e5e5ae506a3464a20914014627602f5c1814ea6760fbc7950c317\" returns successfully" Feb 13 20:18:09.223283 kubelet[2780]: I0213 20:18:09.222117 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-747d6676bd-qkf2k" podStartSLOduration=43.699438707 
podStartE2EDuration="53.222079793s" podCreationTimestamp="2025-02-13 20:17:16 +0000 UTC" firstStartedPulling="2025-02-13 20:17:51.190870795 +0000 UTC m=+56.049061395" lastFinishedPulling="2025-02-13 20:18:00.713511871 +0000 UTC m=+65.571702481" observedRunningTime="2025-02-13 20:18:02.153791037 +0000 UTC m=+67.011981648" watchObservedRunningTime="2025-02-13 20:18:09.222079793 +0000 UTC m=+74.080270399" Feb 13 20:18:09.265822 kubelet[2780]: I0213 20:18:09.263340 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-747d6676bd-nwxvr" podStartSLOduration=39.860587191 podStartE2EDuration="53.263308144s" podCreationTimestamp="2025-02-13 20:17:16 +0000 UTC" firstStartedPulling="2025-02-13 20:17:55.390443848 +0000 UTC m=+60.248634431" lastFinishedPulling="2025-02-13 20:18:08.793164781 +0000 UTC m=+73.651355384" observedRunningTime="2025-02-13 20:18:09.224612059 +0000 UTC m=+74.082802657" watchObservedRunningTime="2025-02-13 20:18:09.263308144 +0000 UTC m=+74.121498764" Feb 13 20:18:09.265822 kubelet[2780]: I0213 20:18:09.265487 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57c6d949f5-dfcnc" podStartSLOduration=38.301309934 podStartE2EDuration="51.265464778s" podCreationTimestamp="2025-02-13 20:17:18 +0000 UTC" firstStartedPulling="2025-02-13 20:17:55.347018664 +0000 UTC m=+60.205209256" lastFinishedPulling="2025-02-13 20:18:08.311173504 +0000 UTC m=+73.169364100" observedRunningTime="2025-02-13 20:18:09.264975391 +0000 UTC m=+74.123166013" watchObservedRunningTime="2025-02-13 20:18:09.265464778 +0000 UTC m=+74.123655396" Feb 13 20:18:10.207160 kubelet[2780]: I0213 20:18:10.206132 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:18:10.857551 containerd[1590]: time="2025-02-13T20:18:10.857482734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:10.871995 containerd[1590]: time="2025-02-13T20:18:10.871734811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:18:10.873433 containerd[1590]: time="2025-02-13T20:18:10.873355227Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:10.878596 containerd[1590]: time="2025-02-13T20:18:10.878198767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:18:10.880264 containerd[1590]: time="2025-02-13T20:18:10.880154249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.084571502s" Feb 13 20:18:10.881993 containerd[1590]: time="2025-02-13T20:18:10.880715509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:18:10.891387 containerd[1590]: time="2025-02-13T20:18:10.891015091Z" level=info msg="CreateContainer within sandbox \"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:18:10.933770 containerd[1590]: time="2025-02-13T20:18:10.933612508Z" level=info msg="CreateContainer within sandbox \"b95efcb385094997014cec64232f24f91f831579ea4a7c10b6f43ee86e312bc2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fea334b79e2a5009df69dda8c95019673f66d70be7ca6154cc16d8421a97d9ab\"" Feb 13 20:18:10.935235 containerd[1590]: time="2025-02-13T20:18:10.935182197Z" level=info msg="StartContainer for \"fea334b79e2a5009df69dda8c95019673f66d70be7ca6154cc16d8421a97d9ab\"" Feb 13 20:18:11.086596 containerd[1590]: time="2025-02-13T20:18:11.086518207Z" level=info msg="StartContainer for \"fea334b79e2a5009df69dda8c95019673f66d70be7ca6154cc16d8421a97d9ab\" returns successfully" Feb 13 20:18:11.249970 kubelet[2780]: I0213 20:18:11.246688 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7s86p" podStartSLOduration=36.189851849 podStartE2EDuration="54.246490557s" podCreationTimestamp="2025-02-13 20:17:17 +0000 UTC" firstStartedPulling="2025-02-13 20:17:52.829794067 +0000 UTC m=+57.687984658" lastFinishedPulling="2025-02-13 20:18:10.886432783 +0000 UTC m=+75.744623366" observedRunningTime="2025-02-13 20:18:11.245909492 +0000 UTC m=+76.104100101" watchObservedRunningTime="2025-02-13 20:18:11.246490557 +0000 UTC m=+76.104681165" Feb 13 20:18:12.213720 kubelet[2780]: I0213 20:18:12.213629 2780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:18:12.220407 kubelet[2780]: I0213 20:18:12.220335 2780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:18:12.334655 systemd[1]: Started sshd@13-137.184.189.10:22-147.75.109.163:43830.service - OpenSSH per-connection server daemon (147.75.109.163:43830). Feb 13 20:18:12.500986 sshd[5470]: Accepted publickey for core from 147.75.109.163 port 43830 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:12.510354 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:12.527183 systemd-logind[1558]: New session 13 of user core. Feb 13 20:18:12.535152 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:18:13.464389 sshd[5470]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:13.482590 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:18:13.483608 systemd[1]: sshd@13-137.184.189.10:22-147.75.109.163:43830.service: Deactivated successfully. Feb 13 20:18:13.495743 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:18:13.499544 systemd-logind[1558]: Removed session 13. Feb 13 20:18:14.510652 systemd-journald[1137]: Under memory pressure, flushing caches. Feb 13 20:18:14.508943 systemd-resolved[1484]: Under memory pressure, flushing caches. Feb 13 20:18:14.509001 systemd-resolved[1484]: Flushed all caches. 
Feb 13 20:18:16.701825 kubelet[2780]: E0213 20:18:16.701687 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:17.703322 kubelet[2780]: E0213 20:18:17.702745 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:18.478344 systemd[1]: Started sshd@14-137.184.189.10:22-147.75.109.163:43836.service - OpenSSH per-connection server daemon (147.75.109.163:43836).
Feb 13 20:18:18.613455 sshd[5498]: Accepted publickey for core from 147.75.109.163 port 43836 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:18.624865 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:18.637914 systemd-logind[1558]: New session 14 of user core.
Feb 13 20:18:18.643457 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:18:19.021453 sshd[5498]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:19.044794 systemd[1]: sshd@14-137.184.189.10:22-147.75.109.163:43836.service: Deactivated successfully.
Feb 13 20:18:19.062369 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:18:19.067772 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:18:19.075572 systemd-logind[1558]: Removed session 14.
Feb 13 20:18:19.855062 kubelet[2780]: I0213 20:18:19.854970 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:18:24.042044 systemd[1]: Started sshd@15-137.184.189.10:22-147.75.109.163:44358.service - OpenSSH per-connection server daemon (147.75.109.163:44358).
Feb 13 20:18:24.175690 sshd[5514]: Accepted publickey for core from 147.75.109.163 port 44358 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:24.179525 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:24.213792 systemd-logind[1558]: New session 15 of user core.
Feb 13 20:18:24.221863 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:18:24.642388 sshd[5514]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:24.651710 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:18:24.653598 systemd[1]: sshd@15-137.184.189.10:22-147.75.109.163:44358.service: Deactivated successfully.
Feb 13 20:18:24.661670 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:18:24.664391 systemd-logind[1558]: Removed session 15.
Feb 13 20:18:25.705022 kubelet[2780]: E0213 20:18:25.702683 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:26.049742 kubelet[2780]: I0213 20:18:26.046479 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:18:29.657934 systemd[1]: Started sshd@16-137.184.189.10:22-147.75.109.163:34082.service - OpenSSH per-connection server daemon (147.75.109.163:34082).
Feb 13 20:18:29.832962 sshd[5557]: Accepted publickey for core from 147.75.109.163 port 34082 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:29.839000 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:29.863064 systemd-logind[1558]: New session 16 of user core.
Feb 13 20:18:29.871535 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:18:30.285320 sshd[5557]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:30.299299 systemd[1]: Started sshd@17-137.184.189.10:22-147.75.109.163:34084.service - OpenSSH per-connection server daemon (147.75.109.163:34084).
Feb 13 20:18:30.300620 systemd[1]: sshd@16-137.184.189.10:22-147.75.109.163:34082.service: Deactivated successfully.
Feb 13 20:18:30.307231 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:18:30.316174 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:18:30.328505 systemd-logind[1558]: Removed session 16.
Feb 13 20:18:30.398913 sshd[5568]: Accepted publickey for core from 147.75.109.163 port 34084 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:30.402075 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:30.419906 systemd-logind[1558]: New session 17 of user core.
Feb 13 20:18:30.423982 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:18:30.998121 sshd[5568]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:31.026506 systemd[1]: Started sshd@18-137.184.189.10:22-147.75.109.163:34096.service - OpenSSH per-connection server daemon (147.75.109.163:34096).
Feb 13 20:18:31.039464 systemd[1]: sshd@17-137.184.189.10:22-147.75.109.163:34084.service: Deactivated successfully.
Feb 13 20:18:31.054174 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:18:31.066651 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:18:31.078597 systemd-logind[1558]: Removed session 17.
Feb 13 20:18:31.253451 sshd[5580]: Accepted publickey for core from 147.75.109.163 port 34096 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:31.264066 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:31.285923 systemd-logind[1558]: New session 18 of user core.
Feb 13 20:18:31.294898 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:18:35.832250 sshd[5580]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:35.853707 systemd[1]: Started sshd@19-137.184.189.10:22-147.75.109.163:34108.service - OpenSSH per-connection server daemon (147.75.109.163:34108).
Feb 13 20:18:35.861070 systemd[1]: sshd@18-137.184.189.10:22-147.75.109.163:34096.service: Deactivated successfully.
Feb 13 20:18:35.883795 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:18:35.884997 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:18:35.913805 systemd-logind[1558]: Removed session 18.
Feb 13 20:18:36.053614 sshd[5627]: Accepted publickey for core from 147.75.109.163 port 34108 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:36.063043 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:36.091193 systemd-logind[1558]: New session 19 of user core.
Feb 13 20:18:36.105051 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:18:36.533196 systemd-journald[1137]: Under memory pressure, flushing caches.
Feb 13 20:18:36.531143 systemd-resolved[1484]: Under memory pressure, flushing caches.
Feb 13 20:18:36.531158 systemd-resolved[1484]: Flushed all caches.
Feb 13 20:18:37.872579 sshd[5627]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:37.896944 systemd[1]: Started sshd@20-137.184.189.10:22-147.75.109.163:34116.service - OpenSSH per-connection server daemon (147.75.109.163:34116).
Feb 13 20:18:37.898212 systemd[1]: sshd@19-137.184.189.10:22-147.75.109.163:34108.service: Deactivated successfully.
Feb 13 20:18:37.916970 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:18:37.922204 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:18:37.932674 systemd-logind[1558]: Removed session 19.
Feb 13 20:18:38.048883 sshd[5639]: Accepted publickey for core from 147.75.109.163 port 34116 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:38.061932 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:38.076881 systemd-logind[1558]: New session 20 of user core.
Feb 13 20:18:38.091738 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:18:38.574154 systemd-journald[1137]: Under memory pressure, flushing caches.
Feb 13 20:18:38.571785 systemd-resolved[1484]: Under memory pressure, flushing caches.
Feb 13 20:18:38.571796 systemd-resolved[1484]: Flushed all caches.
Feb 13 20:18:38.594499 sshd[5639]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:38.604282 systemd[1]: sshd@20-137.184.189.10:22-147.75.109.163:34116.service: Deactivated successfully.
Feb 13 20:18:38.616476 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:18:38.620147 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:18:38.627032 systemd-logind[1558]: Removed session 20.
Feb 13 20:18:43.609529 systemd[1]: Started sshd@21-137.184.189.10:22-147.75.109.163:47496.service - OpenSSH per-connection server daemon (147.75.109.163:47496).
Feb 13 20:18:43.714026 sshd[5661]: Accepted publickey for core from 147.75.109.163 port 47496 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:43.723800 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:43.747001 systemd-logind[1558]: New session 21 of user core.
Feb 13 20:18:43.752169 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:18:44.125415 sshd[5661]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:44.146069 systemd[1]: sshd@21-137.184.189.10:22-147.75.109.163:47496.service: Deactivated successfully.
Feb 13 20:18:44.160048 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:18:44.177273 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:18:44.187495 systemd-logind[1558]: Removed session 21.
Feb 13 20:18:49.147531 systemd[1]: Started sshd@22-137.184.189.10:22-147.75.109.163:47498.service - OpenSSH per-connection server daemon (147.75.109.163:47498).
Feb 13 20:18:49.209517 sshd[5676]: Accepted publickey for core from 147.75.109.163 port 47498 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:49.213377 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:49.226004 systemd-logind[1558]: New session 22 of user core.
Feb 13 20:18:49.231396 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:18:49.426670 sshd[5676]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:49.435490 systemd[1]: sshd@22-137.184.189.10:22-147.75.109.163:47498.service: Deactivated successfully.
Feb 13 20:18:49.444235 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:18:49.445253 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:18:49.452710 systemd-logind[1558]: Removed session 22.
Feb 13 20:18:54.457243 systemd[1]: Started sshd@23-137.184.189.10:22-147.75.109.163:43552.service - OpenSSH per-connection server daemon (147.75.109.163:43552).
Feb 13 20:18:54.628019 sshd[5691]: Accepted publickey for core from 147.75.109.163 port 43552 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:54.633659 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:54.645815 systemd-logind[1558]: New session 23 of user core.
Feb 13 20:18:54.653554 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:18:54.839293 sshd[5691]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:54.845241 systemd[1]: sshd@23-137.184.189.10:22-147.75.109.163:43552.service: Deactivated successfully.
Feb 13 20:18:54.853236 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:18:54.856183 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:18:54.860602 systemd-logind[1558]: Removed session 23.