Feb 13 20:14:05.960986 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:14:05.961013 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:14:05.961025 kernel: BIOS-provided physical RAM map:
Feb 13 20:14:05.961032 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:14:05.961038 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:14:05.961045 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:14:05.961053 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 20:14:05.961060 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 20:14:05.961066 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:14:05.961076 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:14:05.961082 kernel: NX (Execute Disable) protection: active
Feb 13 20:14:05.961089 kernel: APIC: Static calls initialized
Feb 13 20:14:05.961102 kernel: SMBIOS 2.8 present.
Feb 13 20:14:05.961113 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 20:14:05.961127 kernel: Hypervisor detected: KVM
Feb 13 20:14:05.961141 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:14:05.961152 kernel: kvm-clock: using sched offset of 3276743365 cycles
Feb 13 20:14:05.961161 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:14:05.961169 kernel: tsc: Detected 2494.134 MHz processor
Feb 13 20:14:05.961177 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:14:05.961185 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:14:05.961193 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 20:14:05.961201 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:14:05.961209 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:14:05.961225 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:14:05.961258 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 20:14:05.961267 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961274 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961282 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961290 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 20:14:05.961298 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961305 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961318 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961335 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:14:05.961364 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 20:14:05.961372 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 20:14:05.961379 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 20:14:05.961387 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 20:14:05.961395 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 20:14:05.961402 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 20:14:05.961417 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 20:14:05.961425 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:14:05.961433 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:14:05.961453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:14:05.961461 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:14:05.961473 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 20:14:05.961481 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 20:14:05.961495 kernel: Zone ranges:
Feb 13 20:14:05.961508 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:14:05.961520 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 20:14:05.961528 kernel: Normal empty
Feb 13 20:14:05.961536 kernel: Movable zone start for each node
Feb 13 20:14:05.961544 kernel: Early memory node ranges
Feb 13 20:14:05.961558 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:14:05.961572 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 20:14:05.961581 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 20:14:05.961593 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:14:05.961611 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:14:05.961621 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 20:14:05.961629 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:14:05.961638 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:14:05.961652 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:14:05.961663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:14:05.961677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:14:05.961686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:14:05.961698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:14:05.961706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:14:05.961714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:14:05.961722 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:14:05.961730 kernel: TSC deadline timer available
Feb 13 20:14:05.961738 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:14:05.961747 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:14:05.961755 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 20:14:05.961765 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:14:05.961775 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:14:05.961792 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:14:05.961804 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:14:05.961813 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:14:05.961821 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:14:05.961828 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 20:14:05.961838 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:14:05.961847 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:14:05.961855 kernel: random: crng init done
Feb 13 20:14:05.961866 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:14:05.961875 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:14:05.961890 kernel: Fallback order for Node 0: 0
Feb 13 20:14:05.961900 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 20:14:05.961908 kernel: Policy zone: DMA32
Feb 13 20:14:05.961916 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:14:05.961925 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved)
Feb 13 20:14:05.961933 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:14:05.961949 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:14:05.961972 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:14:05.961993 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:14:05.962011 kernel: Dynamic Preempt: voluntary
Feb 13 20:14:05.962021 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:14:05.962037 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:14:05.962045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:14:05.962055 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:14:05.962070 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:14:05.962080 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:14:05.962093 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:14:05.962101 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:14:05.962111 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:14:05.962125 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
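
The BIOS-e820 map and the "Early memory node ranges" entries above fix the totals the kernel reports later in this section. As a quick cross-check, summing the two inclusive node-0 ranges reproduces the 2096612K in "Memory: 1971204K/2096612K available" exactly. The short Python sketch below is illustrative only (it is not part of the boot flow); every number in it is copied from the log lines above.

    # Illustrative cross-check: both inclusive ranges come from the
    # "node 0: [mem start-end]" lines above.
    node_ranges = [
        (0x0000000000001000, 0x000000000009efff),
        (0x0000000000100000, 0x000000007ffdafff),
    ]
    total_bytes = sum(end - start + 1 for start, end in node_ranges)
    print(total_bytes // 1024)  # -> 2096612, matching "Memory: .../2096612K available"
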
Feb 13 20:14:05.962140 kernel: Console: colour VGA+ 80x25
Feb 13 20:14:05.962149 kernel: printk: console [tty0] enabled
Feb 13 20:14:05.962170 kernel: printk: console [ttyS0] enabled
Feb 13 20:14:05.962183 kernel: ACPI: Core revision 20230628
Feb 13 20:14:05.962195 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 20:14:05.962212 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:14:05.962226 kernel: x2apic enabled
Feb 13 20:14:05.962253 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:14:05.962264 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 20:14:05.962278 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Feb 13 20:14:05.962290 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Feb 13 20:14:05.962298 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:14:05.962307 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:14:05.962328 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:14:05.962337 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:14:05.962346 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:14:05.962357 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:14:05.962369 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:14:05.962383 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:14:05.962392 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:14:05.962407 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:14:05.962422 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:14:05.962438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:14:05.962447 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:14:05.962456 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:14:05.962465 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:14:05.962474 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:14:05.962483 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:14:05.962492 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:14:05.962508 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:14:05.962522 kernel: landlock: Up and running.
Feb 13 20:14:05.962530 kernel: SELinux: Initializing.
Feb 13 20:14:05.962539 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:14:05.962548 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:14:05.962557 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 20:14:05.962565 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:14:05.962574 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:14:05.962583 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:14:05.962592 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 20:14:05.962725 kernel: signal: max sigframe size: 1776
Feb 13 20:14:05.962744 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:14:05.962754 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:14:05.962763 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:14:05.962771 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:14:05.962786 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:14:05.962800 kernel: .... node #0, CPUs: #1
Feb 13 20:14:05.962810 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:14:05.962822 kernel: smpboot: Max logical packages: 1
Feb 13 20:14:05.962837 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Feb 13 20:14:05.962845 kernel: devtmpfs: initialized
Feb 13 20:14:05.962854 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:14:05.962863 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:14:05.962872 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:14:05.962885 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:14:05.962899 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:14:05.962908 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:14:05.962916 kernel: audit: type=2000 audit(1739477644.746:1): state=initialized audit_enabled=0 res=1
Feb 13 20:14:05.962928 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:14:05.962937 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:14:05.962953 kernel: cpuidle: using governor menu
Feb 13 20:14:05.962978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:14:05.962988 kernel: dca service started, version 1.12.1
Feb 13 20:14:05.962996 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:14:05.963005 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
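
The per-CPU and total BogoMIPS figures above are consistent with the printed lpj value, assuming CONFIG_HZ=1000 (an assumption; the log does not state HZ). The kernel prints loops_per_jiffy * HZ / 500000, truncated to two decimals; the snippet below is purely illustrative arithmetic.

    # Illustrative; HZ=1000 is an assumption not stated in the log.
    lpj, hz, cpus = 2494134, 1000, 2
    per_cpu = lpj * hz / 500_000
    print(f"{per_cpu:.3f}")         # 4988.268 -> printed as "4988.26 BogoMIPS"
    print(f"{per_cpu * cpus:.3f}")  # 9976.536 -> printed as "(9976.53 BogoMIPS)"
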
Feb 13 20:14:05.963014 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:14:05.963023 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:14:05.963035 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:14:05.963044 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:14:05.963055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:14:05.963070 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:14:05.963080 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:14:05.963088 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:14:05.963098 kernel: ACPI: Interpreter enabled
Feb 13 20:14:05.963113 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:14:05.963125 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:14:05.963137 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:14:05.963146 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:14:05.963155 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 20:14:05.963163 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:14:05.966831 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:14:05.966978 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:14:05.967078 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:14:05.967096 kernel: acpiphp: Slot [3] registered
Feb 13 20:14:05.967105 kernel: acpiphp: Slot [4] registered
Feb 13 20:14:05.967114 kernel: acpiphp: Slot [5] registered
Feb 13 20:14:05.967123 kernel: acpiphp: Slot [6] registered
Feb 13 20:14:05.967132 kernel: acpiphp: Slot [7] registered
Feb 13 20:14:05.967140 kernel: acpiphp: Slot [8] registered
Feb 13 20:14:05.967149 kernel: acpiphp: Slot [9] registered
Feb 13 20:14:05.967158 kernel: acpiphp: Slot [10] registered
Feb 13 20:14:05.967167 kernel: acpiphp: Slot [11] registered
Feb 13 20:14:05.967178 kernel: acpiphp: Slot [12] registered
Feb 13 20:14:05.967187 kernel: acpiphp: Slot [13] registered
Feb 13 20:14:05.967195 kernel: acpiphp: Slot [14] registered
Feb 13 20:14:05.967204 kernel: acpiphp: Slot [15] registered
Feb 13 20:14:05.967213 kernel: acpiphp: Slot [16] registered
Feb 13 20:14:05.967221 kernel: acpiphp: Slot [17] registered
Feb 13 20:14:05.967230 kernel: acpiphp: Slot [18] registered
Feb 13 20:14:05.967306 kernel: acpiphp: Slot [19] registered
Feb 13 20:14:05.967315 kernel: acpiphp: Slot [20] registered
Feb 13 20:14:05.967324 kernel: acpiphp: Slot [21] registered
Feb 13 20:14:05.967350 kernel: acpiphp: Slot [22] registered
Feb 13 20:14:05.967359 kernel: acpiphp: Slot [23] registered
Feb 13 20:14:05.967368 kernel: acpiphp: Slot [24] registered
Feb 13 20:14:05.967377 kernel: acpiphp: Slot [25] registered
Feb 13 20:14:05.967385 kernel: acpiphp: Slot [26] registered
Feb 13 20:14:05.967394 kernel: acpiphp: Slot [27] registered
Feb 13 20:14:05.967403 kernel: acpiphp: Slot [28] registered
Feb 13 20:14:05.967412 kernel: acpiphp: Slot [29] registered
Feb 13 20:14:05.967432 kernel: acpiphp: Slot [30] registered
Feb 13 20:14:05.967449 kernel: acpiphp: Slot [31] registered
Feb 13 20:14:05.967463 kernel: PCI host bridge to bus 0000:00
Feb 13 20:14:05.967626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:14:05.967727 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:14:05.967827 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:14:05.967919 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:14:05.968005 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 20:14:05.968102 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:14:05.969361 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:14:05.969550 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:14:05.969703 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 20:14:05.969811 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 20:14:05.969928 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 20:14:05.970039 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 20:14:05.970144 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 20:14:05.970317 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 20:14:05.970432 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 20:14:05.970529 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 20:14:05.970646 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:14:05.970759 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 20:14:05.970890 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 20:14:05.971036 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 20:14:05.971153 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 20:14:05.973729 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 20:14:05.973902 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 20:14:05.974086 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:14:05.975363 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:14:05.975662 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:14:05.975891 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 20:14:05.976112 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 20:14:05.976230 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 20:14:05.978441 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:14:05.978563 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 20:14:05.978708 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 20:14:05.978827 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 20:14:05.978954 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 20:14:05.979068 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 20:14:05.979169 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 20:14:05.979297 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 20:14:05.979430 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:14:05.979528 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:14:05.979649 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 20:14:05.979772 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 20:14:05.979882 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:14:05.979978 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 20:14:05.980072 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 20:14:05.980179 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 20:14:05.980322 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 20:14:05.980448 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 20:14:05.980545 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 20:14:05.980557 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:14:05.980566 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:14:05.980575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:14:05.980584 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:14:05.980598 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:14:05.980607 kernel: iommu: Default domain type: Translated
Feb 13 20:14:05.980618 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:14:05.980633 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:14:05.980647 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:14:05.980656 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:14:05.980664 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 20:14:05.980781 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 20:14:05.980883 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 20:14:05.980986 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:14:05.980998 kernel: vgaarb: loaded
Feb 13 20:14:05.981008 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 20:14:05.981017 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 20:14:05.981025 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:14:05.981034 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:14:05.981043 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:14:05.981052 kernel: pnp: PnP ACPI init
Feb 13 20:14:05.981061 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 20:14:05.981080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:14:05.981111 kernel: NET: Registered PF_INET protocol family
Feb 13 20:14:05.981127 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:14:05.981136 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:14:05.981145 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:14:05.981154 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:14:05.981163 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:14:05.981172 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:14:05.981180 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:14:05.981194 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:14:05.981204 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:14:05.981219 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:14:05.983434 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:14:05.983536 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:14:05.983624 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:14:05.983758 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:14:05.983893 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 20:14:05.984032 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 20:14:05.984160 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:14:05.984175 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:14:05.986381 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 29329 usecs
Feb 13 20:14:05.986407 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:14:05.986417 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:14:05.986427 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Feb 13 20:14:05.986436 kernel: Initialise system trusted keyrings
Feb 13 20:14:05.986455 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:14:05.986464 kernel: Key type asymmetric registered
Feb 13 20:14:05.986473 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:14:05.986482 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:14:05.986491 kernel: io scheduler mq-deadline registered
Feb 13 20:14:05.986500 kernel: io scheduler kyber registered
Feb 13 20:14:05.986509 kernel: io scheduler bfq registered
Feb 13 20:14:05.986517 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:14:05.986527 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 20:14:05.986536 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:14:05.986548 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:14:05.986556 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:14:05.986569 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:14:05.986584 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:14:05.986597 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:14:05.986610 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:14:05.986624 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:14:05.986800 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:14:05.986925 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:14:05.987020 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:14:05 UTC (1739477645)
Feb 13 20:14:05.987108 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:14:05.987120 kernel: intel_pstate: CPU model not supported
Feb 13 20:14:05.987129 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:14:05.987138 kernel: Segment Routing with IPv6
Feb 13 20:14:05.987147 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:14:05.987156 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:14:05.987170 kernel: Key type dns_resolver registered
Feb 13 20:14:05.987179 kernel: IPI shorthand broadcast: enabled
Feb 13 20:14:05.987188 kernel: sched_clock: Marking stable (915003837, 86210084)->(1015047169, -13833248)
Feb 13 20:14:05.987197 kernel: registered taskstats version 1
Feb 13 20:14:05.987206 kernel: Loading compiled-in X.509 certificates
Feb 13 20:14:05.987215 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:14:05.987223 kernel: Key type .fscrypt registered
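
The rtc_cmos entry above prints the same instant twice, once as an ISO timestamp and once as a Unix epoch value; a one-liner confirms the two agree.

    # Sanity check of "setting system clock to 2025-02-13T20:14:05 UTC (1739477645)".
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1739477645, tz=timezone.utc).isoformat())
    # -> 2025-02-13T20:14:05+00:00
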
Feb 13 20:14:05.987548 kernel: Key type fscrypt-provisioning registered
Feb 13 20:14:05.987567 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:14:05.987582 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:14:05.987591 kernel: ima: No architecture policies found
Feb 13 20:14:05.987600 kernel: clk: Disabling unused clocks
Feb 13 20:14:05.987609 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:14:05.987619 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:14:05.987656 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:14:05.987674 kernel: Run /init as init process
Feb 13 20:14:05.987689 kernel: with arguments:
Feb 13 20:14:05.987705 kernel: /init
Feb 13 20:14:05.987723 kernel: with environment:
Feb 13 20:14:05.987737 kernel: HOME=/
Feb 13 20:14:05.987751 kernel: TERM=linux
Feb 13 20:14:05.987766 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:14:05.987786 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:14:05.987799 systemd[1]: Detected virtualization kvm.
Feb 13 20:14:05.987809 systemd[1]: Detected architecture x86-64.
Feb 13 20:14:05.987818 systemd[1]: Running in initrd.
Feb 13 20:14:05.987831 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:14:05.987843 systemd[1]: Hostname set to .
Feb 13 20:14:05.987859 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:14:05.987875 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:14:05.987885 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:14:05.987895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:14:05.987906 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:14:05.987915 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:14:05.987928 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:14:05.987939 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:14:05.987959 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:14:05.987970 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:14:05.987979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:14:05.987989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:14:05.988002 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:14:05.988012 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:14:05.988022 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:14:05.988035 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:14:05.988045 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:14:05.988059 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:14:05.988080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:14:05.988096 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:14:05.988106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:14:05.988116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:14:05.988125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:14:05.988135 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:14:05.988145 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:14:05.988157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:14:05.988199 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:14:05.988213 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:14:05.988230 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:14:05.989272 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:14:05.989285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:14:05.989295 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:14:05.989305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:14:05.989315 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:14:05.989375 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 20:14:05.989403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:14:05.989414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:14:05.989426 systemd-journald[183]: Journal started
Feb 13 20:14:05.989448 systemd-journald[183]: Runtime Journal (/run/log/journal/742f4d560a6441a69d4b8aad3a0a6a35) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:14:05.963763 systemd-modules-load[184]: Inserted module 'overlay'
Feb 13 20:14:06.012207 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:14:06.012250 kernel: Bridge firewalling registered
Feb 13 20:14:06.012272 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:14:05.998892 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 13 20:14:06.016984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:14:06.017810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:14:06.026466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:14:06.028403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:14:06.031775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:14:06.043087 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:14:06.048627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:14:06.055052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:14:06.056329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:14:06.059436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:14:06.063535 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:14:06.076566 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:14:06.099213 dracut-cmdline[216]: dracut-dracut-053
Feb 13 20:14:06.106074 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:14:06.111529 systemd-resolved[219]: Positive Trust Anchors:
Feb 13 20:14:06.111546 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:14:06.111581 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:14:06.114720 systemd-resolved[219]: Defaulting to hostname 'linux'.
Feb 13 20:14:06.116147 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:14:06.118979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:14:06.209289 kernel: SCSI subsystem initialized
Feb 13 20:14:06.219267 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:14:06.230266 kernel: iscsi: registered transport (tcp)
Feb 13 20:14:06.253546 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:14:06.253636 kernel: QLogic iSCSI HBA Driver
Feb 13 20:14:06.307912 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:14:06.314522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:14:06.344446 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:14:06.344519 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:14:06.344532 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:14:06.389274 kernel: raid6: avx2x4 gen() 17411 MB/s
Feb 13 20:14:06.406292 kernel: raid6: avx2x2 gen() 17479 MB/s
Feb 13 20:14:06.423486 kernel: raid6: avx2x1 gen() 12950 MB/s
Feb 13 20:14:06.423566 kernel: raid6: using algorithm avx2x2 gen() 17479 MB/s
Feb 13 20:14:06.441638 kernel: raid6: .... xor() 19477 MB/s, rmw enabled
Feb 13 20:14:06.441745 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:14:06.463276 kernel: xor: automatically using best checksumming function avx
Feb 13 20:14:06.651275 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:14:06.665392 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:14:06.670486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:14:06.699000 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Feb 13 20:14:06.704471 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:14:06.712486 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:14:06.732132 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Feb 13 20:14:06.768279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:14:06.774484 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:14:06.836734 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:14:06.842466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:14:06.869369 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:14:06.871248 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:14:06.873012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:14:06.874363 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:14:06.880534 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:14:06.904507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:14:06.921268 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 20:14:06.983172 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:14:06.983421 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:14:06.983453 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:14:06.983671 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:14:06.983690 kernel: GPT:9289727 != 125829119
Feb 13 20:14:06.983706 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:14:06.983723 kernel: GPT:9289727 != 125829119
Feb 13 20:14:06.983739 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:14:06.983757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:14:06.983774 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 20:14:07.016423 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:14:07.016458 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:14:07.016471 kernel: ACPI: bus type USB registered
Feb 13 20:14:07.016483 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Feb 13 20:14:07.016667 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:14:07.016681 kernel: usbcore: registered new interface driver hub
Feb 13 20:14:06.976607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:14:06.976764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:14:06.977817 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:14:06.978710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:14:06.978914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:14:06.979662 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:14:07.069923 kernel: usbcore: registered new device driver usb
Feb 13 20:14:07.069961 kernel: libata version 3.00 loaded.
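
The GPT warnings above ("GPT:9289727 != 125829119") are the usual signature of a disk image that was written small and then grown: the backup GPT header still sits at the last LBA of the original image instead of at the end of the enlarged virtio disk. The arithmetic below is illustrative; both LBA values come straight from the log.

    # Illustrative: recover both sizes from the "GPT:9289727 != 125829119" line.
    SECTOR = 512
    alt_lba, last_lba = 9289727, 125829119
    print((alt_lba + 1) * SECTOR / 2**30)   # ~4.43 GiB: apparent size of the original image
    print((last_lba + 1) * SECTOR / 2**30)  # 60.0 GiB: the grown disk, matching "[vda] ... 60.0 GiB"
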
Feb 13 20:14:07.069975 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 20:14:07.075872 kernel: scsi host1: ata_piix
Feb 13 20:14:07.076026 kernel: scsi host2: ata_piix
Feb 13 20:14:07.076178 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 20:14:07.076197 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 20:14:06.989841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:14:07.074573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:14:07.092643 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:14:07.107778 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (463)
Feb 13 20:14:07.113271 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Feb 13 20:14:07.118346 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:14:07.131045 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:14:07.131555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:14:07.133050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:14:07.140081 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:14:07.146542 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:14:07.152952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:14:07.158388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:14:07.160362 disk-uuid[550]: Primary Header is updated.
Feb 13 20:14:07.160362 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:14:07.160362 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:14:07.270221 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 20:14:07.277123 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 20:14:07.277627 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 20:14:07.278057 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 20:14:07.278376 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:14:07.278522 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 20:14:08.178948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:14:08.179021 disk-uuid[551]: The operation has completed successfully.
Feb 13 20:14:08.218827 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:14:08.219445 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:14:08.238544 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:14:08.245006 sh[564]: Success
Feb 13 20:14:08.259278 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:14:08.322449 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:14:08.334483 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:14:08.341499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:14:08.355283 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:14:08.355367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:14:08.355382 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:14:08.356382 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:14:08.356441 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:14:08.363509 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:14:08.364603 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:14:08.371460 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:14:08.373457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:14:08.385168 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:14:08.385227 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:14:08.385254 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:14:08.388269 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:14:08.399876 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:14:08.400523 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:14:08.404981 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:14:08.416210 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:14:08.516825 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:14:08.531543 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:14:08.566116 ignition[648]: Ignition 2.19.0
Feb 13 20:14:08.566128 ignition[648]: Stage: fetch-offline
Feb 13 20:14:08.566163 ignition[648]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:14:08.567291 systemd-networkd[749]: lo: Link UP
Feb 13 20:14:08.566173 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:14:08.567298 systemd-networkd[749]: lo: Gained carrier
Feb 13 20:14:08.566889 ignition[648]: parsed url from cmdline: ""
Feb 13 20:14:08.569642 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:14:08.566895 ignition[648]: no config URL provided
Feb 13 20:14:08.569786 systemd-networkd[749]: Enumeration completed
Feb 13 20:14:08.566901 ignition[648]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:14:08.570369 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:14:08.566914 ignition[648]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:14:08.570373 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 20:14:08.566921 ignition[648]: failed to fetch config: resource requires networking
Feb 13 20:14:08.571177 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:14:08.567751 ignition[648]: Ignition finished successfully
Feb 13 20:14:08.571181 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:14:08.571541 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:14:08.571827 systemd-networkd[749]: eth0: Link UP
Feb 13 20:14:08.571833 systemd-networkd[749]: eth0: Gained carrier
Feb 13 20:14:08.571841 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:14:08.574753 systemd[1]: Reached target network.target - Network.
Feb 13 20:14:08.575794 systemd-networkd[749]: eth1: Link UP
Feb 13 20:14:08.575800 systemd-networkd[749]: eth1: Gained carrier
Feb 13 20:14:08.575817 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:14:08.584712 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:14:08.590329 systemd-networkd[749]: eth0: DHCPv4 address 146.190.40.231/19, gateway 146.190.32.1 acquired from 169.254.169.253
Feb 13 20:14:08.596374 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253
Feb 13 20:14:08.610578 ignition[757]: Ignition 2.19.0
Feb 13 20:14:08.610590 ignition[757]: Stage: fetch
Feb 13 20:14:08.610780 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:14:08.610792 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:14:08.610914 ignition[757]: parsed url from cmdline: ""
Feb 13 20:14:08.610919 ignition[757]: no config URL provided
Feb 13 20:14:08.610925 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:14:08.610933 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:14:08.610953 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 20:14:08.636952 ignition[757]: GET result: OK
Feb 13 20:14:08.637303 ignition[757]: parsing config with SHA512: 27ba9f91d0b0269431aed4cf25ee35a764bcffe2427127a892b75aca6901f978b0fbca760d6bcdb113c99c317609d22f7e9ba84e7159154ef3c47bce2768fe37
Feb 13 20:14:08.642265 unknown[757]: fetched base config from "system"
Feb 13 20:14:08.642276 unknown[757]: fetched base config from "system"
Feb 13 20:14:08.642634 ignition[757]: fetch: fetch complete
Feb 13 20:14:08.642283 unknown[757]: fetched user config from "digitalocean"
Feb 13 20:14:08.642639 ignition[757]: fetch: fetch passed
Feb 13 20:14:08.644208 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:14:08.642686 ignition[757]: Ignition finished successfully
Feb 13 20:14:08.655493 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:14:08.677319 ignition[765]: Ignition 2.19.0
Feb 13 20:14:08.678310 ignition[765]: Stage: kargs
Feb 13 20:14:08.678544 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:14:08.678555 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:14:08.680721 ignition[765]: kargs: kargs passed
Feb 13 20:14:08.681117 ignition[765]: Ignition finished successfully
Feb 13 20:14:08.682918 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:14:08.688516 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
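
In the fetch stage above, Ignition pulls the instance user-data from DigitalOcean's link-local metadata service and logs a SHA512 digest of the config it parses. The Python sketch below mimics that request; it is illustrative only (this is not Ignition's code, the endpoint is only reachable from inside a droplet, and Ignition may digest a normalized form of the config rather than the raw bytes).

    # Illustrative re-creation of the fetch above; run from a droplet.
    import hashlib
    import urllib.request

    URL = "http://169.254.169.254/metadata/v1/user-data"  # endpoint from the log
    with urllib.request.urlopen(URL, timeout=5) as resp:
        user_data = resp.read()
    # A SHA-512 fingerprint like the one the "parsing config with SHA512: ..." line reports.
    print(hashlib.sha512(user_data).hexdigest())
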
Feb 13 20:14:08.709732 ignition[771]: Ignition 2.19.0 Feb 13 20:14:08.709741 ignition[771]: Stage: disks Feb 13 20:14:08.709937 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:08.709951 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:08.711584 ignition[771]: disks: disks passed Feb 13 20:14:08.712979 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:14:08.711642 ignition[771]: Ignition finished successfully Feb 13 20:14:08.717602 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:14:08.718077 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:14:08.718886 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:14:08.719712 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:14:08.720529 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:14:08.727472 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:14:08.744566 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:14:08.747607 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:14:08.753430 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:14:08.851349 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:14:08.852044 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:14:08.853419 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:14:08.870450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:14:08.873380 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:14:08.874955 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Feb 13 20:14:08.879490 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:14:08.889807 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Feb 13 20:14:08.889845 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:08.889864 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:14:08.889883 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:14:08.883433 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:14:08.883487 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:14:08.893291 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:14:08.897445 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:14:08.903000 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:14:08.910594 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:14:08.980280 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:14:08.983516 coreos-metadata[790]: Feb 13 20:14:08.978 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:08.988656 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:14:08.989952 coreos-metadata[789]: Feb 13 20:14:08.988 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:08.992891 coreos-metadata[790]: Feb 13 20:14:08.992 INFO Fetch successful Feb 13 20:14:08.993383 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:14:08.998403 coreos-metadata[790]: Feb 13 20:14:08.997 INFO wrote hostname ci-4081.3.1-9-4d1da4e47c to /sysroot/etc/hostname Feb 13 20:14:09.001131 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:14:09.000518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:14:09.002415 coreos-metadata[789]: Feb 13 20:14:09.001 INFO Fetch successful Feb 13 20:14:09.008035 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 13 20:14:09.008580 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Feb 13 20:14:09.104585 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:14:09.109417 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:14:09.111444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:14:09.124262 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:09.139832 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:14:09.155138 ignition[910]: INFO : Ignition 2.19.0 Feb 13 20:14:09.155138 ignition[910]: INFO : Stage: mount Feb 13 20:14:09.156483 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:09.156483 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:09.156483 ignition[910]: INFO : mount: mount passed Feb 13 20:14:09.156483 ignition[910]: INFO : Ignition finished successfully Feb 13 20:14:09.157882 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:14:09.167513 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:14:09.353728 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:14:09.366527 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:14:09.375266 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921) Feb 13 20:14:09.377275 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:09.377335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:14:09.378548 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:14:09.381269 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:14:09.383274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:14:09.409927 ignition[938]: INFO : Ignition 2.19.0 Feb 13 20:14:09.410710 ignition[938]: INFO : Stage: files Feb 13 20:14:09.410710 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:09.410710 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:09.411819 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:14:09.412774 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:14:09.412774 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:14:09.415734 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:14:09.416282 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:14:09.416282 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:14:09.416179 unknown[938]: wrote ssh authorized keys file for user: core Feb 13 20:14:09.417821 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:14:09.417821 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 20:14:09.452185 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:14:09.662421 systemd-networkd[749]: eth0: Gained IPv6LL Feb 13 20:14:09.676552 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:14:09.677363 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:14:09.681681 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:14:09.681681 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:14:09.681681 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:14:09.681681 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:14:09.681681 ignition[938]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:14:09.681681 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:14:09.681681 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 20:14:10.135143 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:14:10.368014 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:14:10.368014 ignition[938]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:14:10.370768 ignition[938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:14:10.370768 ignition[938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:14:10.370768 ignition[938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:14:10.370768 ignition[938]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:14:10.370768 ignition[938]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:14:10.370768 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:14:10.370768 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:14:10.370768 ignition[938]: INFO : files: files passed Feb 13 20:14:10.370768 ignition[938]: INFO : Ignition finished successfully Feb 13 20:14:10.372006 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:14:10.383857 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:14:10.387554 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:14:10.392513 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:14:10.392663 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:14:10.411469 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:14:10.411469 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:14:10.413967 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:14:10.416065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:14:10.417315 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:14:10.423541 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:14:10.464534 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:14:10.464685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
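The files stage above installs SSH keys for "core", downloads a Helm tarball and a kubernetes sysext image, symlinks the image into /etc/extensions, and enables prepare-helm.service. The Ignition config driving this never appears in the log, so the following is a hedged reconstruction of an equivalent Ignition v3 payload covering a subset of the logged operations; the paths and URLs are verbatim from the log, everything else is assumed syntax:

```python
import json

# Hedged reconstruction (not the actual user config) of an Ignition v3
# payload matching some of the operations logged by the files stage.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
        ],
        "links": [
            # Linking the image under /etc/extensions is what makes
            # systemd-sysext pick it up later in the boot.
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}
print(json.dumps(config, indent=2))
```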
Feb 13 20:14:10.466468 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:14:10.467498 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:14:10.468054 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:14:10.477530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:14:10.494382 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:14:10.503533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:14:10.514805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:14:10.515372 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:14:10.516558 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:14:10.517576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:14:10.517772 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:14:10.518883 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:14:10.519867 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:14:10.520543 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:14:10.521319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:14:10.522111 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:14:10.522949 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:14:10.523718 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:14:10.524605 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:14:10.525425 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:14:10.526354 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:14:10.527005 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:14:10.527173 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:14:10.528031 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:14:10.528874 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:14:10.529574 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:14:10.529682 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:14:10.530460 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:14:10.530617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:14:10.531827 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:14:10.532058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:14:10.532805 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:14:10.532900 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:14:10.533610 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:14:10.533714 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:14:10.547626 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 20:14:10.548854 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:14:10.549047 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:14:10.553508 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:14:10.553895 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:14:10.554057 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:14:10.554676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:14:10.554788 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:14:10.559729 systemd-networkd[749]: eth1: Gained IPv6LL Feb 13 20:14:10.564398 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:14:10.564997 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:14:10.572324 ignition[990]: INFO : Ignition 2.19.0 Feb 13 20:14:10.572324 ignition[990]: INFO : Stage: umount Feb 13 20:14:10.575008 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:10.575008 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:10.575008 ignition[990]: INFO : umount: umount passed Feb 13 20:14:10.575008 ignition[990]: INFO : Ignition finished successfully Feb 13 20:14:10.578076 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:14:10.578255 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:14:10.581676 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:14:10.581763 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:14:10.586573 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:14:10.586672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:14:10.587069 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:14:10.587110 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:14:10.589384 systemd[1]: Stopped target network.target - Network. Feb 13 20:14:10.589831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:14:10.589893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:14:10.590378 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:14:10.599290 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:14:10.603663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:14:10.611806 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:14:10.612572 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:14:10.613383 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:14:10.613795 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:14:10.614667 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:14:10.615082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:14:10.615814 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:14:10.615873 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:14:10.616593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:14:10.616637 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Feb 13 20:14:10.617338 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:14:10.618192 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:14:10.620314 systemd-networkd[749]: eth0: DHCPv6 lease lost Feb 13 20:14:10.620576 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:14:10.621185 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:14:10.621351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:14:10.623554 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:14:10.623680 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:14:10.625201 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:14:10.625414 systemd-networkd[749]: eth1: DHCPv6 lease lost Feb 13 20:14:10.626723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:14:10.629134 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:14:10.629812 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:14:10.631924 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:14:10.631988 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:14:10.636392 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:14:10.636801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:14:10.636868 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:14:10.637555 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:14:10.637616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:14:10.638423 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:14:10.638471 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:14:10.639231 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:14:10.639338 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:14:10.640093 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:14:10.659771 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:14:10.659990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:14:10.661853 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:14:10.661938 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:14:10.662937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:14:10.662979 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:14:10.663762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:14:10.663816 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:14:10.668008 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:14:10.668082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:14:10.668923 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:14:10.668983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:14:10.682594 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 20:14:10.685559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:14:10.685673 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:14:10.687349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:10.687436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:10.688941 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:14:10.689673 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:14:10.690871 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:14:10.690977 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:14:10.693793 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:14:10.700503 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:14:10.711139 systemd[1]: Switching root. Feb 13 20:14:10.755991 systemd-journald[183]: Journal stopped Feb 13 20:14:11.977558 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 13 20:14:11.977668 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:14:11.977688 kernel: SELinux: policy capability open_perms=1 Feb 13 20:14:11.977705 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:14:11.977722 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:14:11.977734 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:14:11.977746 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:14:11.977758 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:14:11.977769 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:14:11.977785 kernel: audit: type=1403 audit(1739477650.885:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:14:11.977800 systemd[1]: Successfully loaded SELinux policy in 39.046ms. Feb 13 20:14:11.977825 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.198ms. Feb 13 20:14:11.977840 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:14:11.977853 systemd[1]: Detected virtualization kvm. Feb 13 20:14:11.977866 systemd[1]: Detected architecture x86-64. Feb 13 20:14:11.977880 systemd[1]: Detected first boot. Feb 13 20:14:11.977900 systemd[1]: Hostname set to <ci-4081.3.1-9-4d1da4e47c>. Feb 13 20:14:11.977919 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:14:11.977956 zram_generator::config[1032]: No configuration found. Feb 13 20:14:11.977980 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:14:11.977996 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:14:11.978009 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:14:11.978022 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:14:11.978036 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:14:11.978049 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:14:11.978062 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:14:11.978075 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:14:11.978087 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:14:11.978104 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:14:11.978117 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:14:11.978129 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:14:11.978142 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:14:11.978158 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:14:11.978171 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:14:11.978184 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:14:11.978196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:14:11.978212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:14:11.978226 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:14:11.985281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:14:11.985318 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:14:11.985333 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:14:11.985346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:14:11.985359 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:14:11.985382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:14:11.985394 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:14:11.985408 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:14:11.985420 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:14:11.985433 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:14:11.985446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:14:11.985459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:14:11.985471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:14:11.985484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:14:11.985499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:14:11.985512 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:14:11.985525 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:14:11.985537 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:14:11.985550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:11.985563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:14:11.985575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:14:11.985588 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 20:14:11.985602 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:14:11.985618 systemd[1]: Reached target machines.target - Containers. Feb 13 20:14:11.985630 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:14:11.985643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:11.985655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:14:11.985668 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:14:11.985680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:14:11.985693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:14:11.985705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:14:11.985720 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:14:11.985733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:14:11.985747 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:14:11.985761 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:14:11.985774 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:14:11.985786 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:14:11.985799 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:14:11.985812 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:14:11.985824 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:14:11.985840 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:14:11.985853 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:14:11.985866 kernel: loop: module loaded Feb 13 20:14:11.985879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:14:11.985891 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:14:11.985904 systemd[1]: Stopped verity-setup.service. Feb 13 20:14:11.985916 kernel: fuse: init (API version 7.39) Feb 13 20:14:11.985942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:11.985961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:14:11.985984 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:14:11.986002 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:14:11.986016 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:14:11.986028 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:14:11.986044 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:14:11.986057 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:14:11.986069 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 13 20:14:11.986082 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:14:11.986095 kernel: ACPI: bus type drm_connector registered Feb 13 20:14:11.986112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:14:11.986132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:14:11.986145 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:14:11.986158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:14:11.986170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:14:11.986183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:14:11.986196 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:14:11.986208 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:14:11.986221 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:14:11.989334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:14:11.989374 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:14:11.989392 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:14:11.989406 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:14:11.989418 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:14:11.989432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:14:11.989445 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:14:11.989464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:14:11.989481 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:14:11.989502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:14:11.989515 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:14:11.989566 systemd-journald[1108]: Collecting audit messages is disabled. Feb 13 20:14:11.989598 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:14:11.989617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:11.989630 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:14:11.989643 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:14:11.989658 systemd-journald[1108]: Journal started Feb 13 20:14:11.989686 systemd-journald[1108]: Runtime Journal (/run/log/journal/742f4d560a6441a69d4b8aad3a0a6a35) is 4.9M, max 39.3M, 34.4M free. Feb 13 20:14:11.994964 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:14:11.995030 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:14:11.581714 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:14:11.602472 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:14:11.603189 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 20:14:12.019345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:14:12.019415 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:14:12.019434 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:14:12.018322 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:14:12.019834 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:14:12.021902 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:14:12.023209 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:14:12.030627 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:14:12.050272 kernel: loop0: detected capacity change from 0 to 8 Feb 13 20:14:12.058507 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:14:12.071099 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:14:12.074477 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:14:12.081047 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:14:12.093291 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:14:12.096593 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:14:12.098836 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:14:12.107957 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 20:14:12.110954 systemd-journald[1108]: Time spent on flushing to /var/log/journal/742f4d560a6441a69d4b8aad3a0a6a35 is 51.478ms for 993 entries. Feb 13 20:14:12.110954 systemd-journald[1108]: System Journal (/var/log/journal/742f4d560a6441a69d4b8aad3a0a6a35) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:14:12.171691 systemd-journald[1108]: Received client request to flush runtime journal. Feb 13 20:14:12.117077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:14:12.152601 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:14:12.155337 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:14:12.159732 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:14:12.176711 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:14:12.177275 kernel: loop2: detected capacity change from 0 to 142488 Feb 13 20:14:12.202028 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:14:12.212177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:14:12.226458 kernel: loop3: detected capacity change from 0 to 140768 Feb 13 20:14:12.277932 kernel: loop4: detected capacity change from 0 to 8 Feb 13 20:14:12.278018 kernel: loop5: detected capacity change from 0 to 218376 Feb 13 20:14:12.280081 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Feb 13 20:14:12.280102 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. 
Feb 13 20:14:12.298304 kernel: loop6: detected capacity change from 0 to 142488 Feb 13 20:14:12.304464 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:14:12.326298 kernel: loop7: detected capacity change from 0 to 140768 Feb 13 20:14:12.345161 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 20:14:12.345935 (sd-merge)[1176]: Merged extensions into '/usr'. Feb 13 20:14:12.356570 systemd[1]: Reloading requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:14:12.356590 systemd[1]: Reloading... Feb 13 20:14:12.566278 zram_generator::config[1205]: No configuration found. Feb 13 20:14:12.774467 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:14:12.810657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:14:12.868067 systemd[1]: Reloading finished in 510 ms. Feb 13 20:14:12.915228 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:14:12.916073 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:14:12.927533 systemd[1]: Starting ensure-sysext.service... Feb 13 20:14:12.931150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:14:12.943384 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:14:12.943405 systemd[1]: Reloading... Feb 13 20:14:13.001694 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:14:13.003165 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:14:13.006310 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:14:13.006789 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Feb 13 20:14:13.007391 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Feb 13 20:14:13.014971 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:14:13.015297 systemd-tmpfiles[1248]: Skipping /boot Feb 13 20:14:13.037389 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:14:13.039580 systemd-tmpfiles[1248]: Skipping /boot Feb 13 20:14:13.040265 zram_generator::config[1274]: No configuration found. Feb 13 20:14:13.205504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:14:13.260357 systemd[1]: Reloading finished in 316 ms. Feb 13 20:14:13.278734 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:14:13.284849 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:14:13.292423 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:14:13.296458 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
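The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images onto /usr, which is what makes the subsequent unit reload necessary. A small sketch for inspecting the merge on a running host; it simply shells out to the real systemd-sysext tool, whose output format varies by systemd version:

```python
import subprocess

# Confirms the merge the log reports ("Merged extensions into '/usr'").
# Needs systemd-sysext on the target host; run as root for full detail.
status = subprocess.run(
    ["systemd-sysext", "status", "--no-pager"],
    capture_output=True, text=True, check=True,
)
print(status.stdout)

# The kubernetes image itself was linked into place earlier by the
# Ignition files stage: /etc/extensions/kubernetes.raw ->
# /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
```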
Feb 13 20:14:13.300518 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:14:13.305473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:14:13.309582 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:14:13.313450 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:14:13.320506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.320693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:13.328608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:14:13.330652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:14:13.340554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:14:13.341112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:13.341250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.344494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.344695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:13.344870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:13.344984 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.350344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.350611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:13.359618 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:14:13.360167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:13.360345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.379483 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:14:13.380368 systemd[1]: Finished ensure-sysext.service. Feb 13 20:14:13.388933 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:14:13.396252 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:14:13.416695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:14:13.418808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:14:13.420504 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Feb 13 20:14:13.427636 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Feb 13 20:14:13.428800 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:14:13.443066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:14:13.446924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:14:13.448657 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:14:13.450923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:14:13.453122 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:14:13.454982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:14:13.456376 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:14:13.457281 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:14:13.459348 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:14:13.465100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:14:13.473457 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:14:13.483967 augenrules[1360]: No rules Feb 13 20:14:13.485479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:14:13.498333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:14:13.510541 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:14:13.541490 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:14:13.621023 systemd-resolved[1323]: Positive Trust Anchors: Feb 13 20:14:13.623289 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:14:13.623334 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:14:13.632356 systemd-resolved[1323]: Using system hostname 'ci-4081.3.1-9-4d1da4e47c'. Feb 13 20:14:13.635444 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:14:13.635915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:14:13.647577 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:14:13.648096 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:14:13.658139 systemd-networkd[1367]: lo: Link UP Feb 13 20:14:13.658149 systemd-networkd[1367]: lo: Gained carrier Feb 13 20:14:13.659139 systemd-networkd[1367]: Enumeration completed Feb 13 20:14:13.659254 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:14:13.659764 systemd[1]: Reached target network.target - Network. 
Feb 13 20:14:13.669587 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:14:13.678878 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 20:14:13.681126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.681382 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:13.687593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:14:13.690619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:14:13.692790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:14:13.693512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:13.693579 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:14:13.693603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:13.724285 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 20:14:13.729168 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 20:14:13.739027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:14:13.739215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:14:13.744634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:14:13.744816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:14:13.747089 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:14:13.747263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:14:13.749228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:14:13.756129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:14:13.762300 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1379) Feb 13 20:14:13.763283 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:14:13.782264 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:14:13.788440 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:14:13.817489 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-ca:1f:64:9c:14:a2.network. Feb 13 20:14:13.818329 systemd-networkd[1367]: eth1: Link UP Feb 13 20:14:13.818338 systemd-networkd[1367]: eth1: Gained carrier Feb 13 20:14:13.821408 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:13.834373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:14:13.841490 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Feb 13 20:14:13.843255 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:14:13.854496 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:14:13.870077 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:14:13.898671 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-ba:7d:04:d4:b7:db.network. Feb 13 20:14:13.899164 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:13.900186 systemd-networkd[1367]: eth0: Link UP Feb 13 20:14:13.900196 systemd-networkd[1367]: eth0: Gained carrier Feb 13 20:14:13.906857 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:13.910384 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:13.912422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:13.921266 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:14:13.965302 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:14:13.965373 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:14:13.968263 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:14:13.969306 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:14:13.969374 kernel: [drm] features: -context_init Feb 13 20:14:13.970280 kernel: [drm] number of scanouts: 1 Feb 13 20:14:13.974266 kernel: [drm] number of cap sets: 0 Feb 13 20:14:13.979362 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:14:13.992503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:13.992955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:14.003985 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:14:14.004059 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:14:14.003417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:14.008340 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:14:14.025165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:14.025413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:14.028438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:14.112003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:14.121619 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:14:14.147814 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:14:14.154564 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:14:14.170664 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:14:14.203497 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:14:14.204691 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:14:14.204832 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:14:14.205001 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
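Earlier in this run, parse-ip-for-networkd generated per-NIC .network units keyed by MAC address (10-ca:1f:64:9c:14:a2.network for eth1, 10-ba:7d:04:d4:b7:db.network for eth0), which systemd-networkd then matched to bring both links up with carrier. Only the file names appear in the log, so this is a guess at the minimal body such a unit could carry, rendered from Python:

```python
# Only the generated unit's MAC-derived name appears in the log; this is
# a guess at the minimal body such a file could take (the generator may
# also emit addresses, routes, DNS and lease options).
mac = "ca:1f:64:9c:14:a2"
body = f"""[Match]
MACAddress={mac}

[Network]
DHCP=yes
"""
print(f"# /run/systemd/network/10-{mac}.network")
print(body)
```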
Feb 13 20:14:14.205126 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:14:14.205454 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:14:14.205730 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:14:14.205807 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:14:14.205917 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:14:14.205941 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:14:14.205990 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:14:14.207926 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:14:14.209803 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:14:14.216386 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:14:14.217971 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:14:14.219056 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:14:14.219608 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:14:14.220096 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:14:14.223702 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:14:14.223734 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:14:14.232407 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:14:14.235476 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:14:14.244489 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:14:14.248907 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:14:14.256105 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:14:14.265464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:14:14.265985 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:14:14.275485 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:14:14.288076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:14:14.293228 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:14:14.297478 coreos-metadata[1437]: Feb 13 20:14:14.296 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:14.307501 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:14:14.314746 coreos-metadata[1437]: Feb 13 20:14:14.313 INFO Fetch successful Feb 13 20:14:14.319476 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:14:14.321165 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:14:14.322791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 20:14:14.329509 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:14:14.329930 jq[1439]: false Feb 13 20:14:14.334403 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:14:14.334918 dbus-daemon[1438]: [system] SELinux support is enabled Feb 13 20:14:14.337750 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:14:14.342464 jq[1451]: true Feb 13 20:14:14.343640 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:14:14.354720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:14:14.354937 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:14:14.378170 extend-filesystems[1440]: Found loop4 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found loop5 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found loop6 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found loop7 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda1 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda2 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda3 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found usr Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda4 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda6 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda7 Feb 13 20:14:14.384432 extend-filesystems[1440]: Found vda9 Feb 13 20:14:14.384432 extend-filesystems[1440]: Checking size of /dev/vda9 Feb 13 20:14:14.380246 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:14:14.401997 update_engine[1450]: I20250213 20:14:14.379644 1450 main.cc:92] Flatcar Update Engine starting Feb 13 20:14:14.381410 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:14:14.389059 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:14:14.389159 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:14:14.394057 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:14:14.394144 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:14:14.394169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:14:14.406776 tar[1456]: linux-amd64/LICENSE Feb 13 20:14:14.406776 tar[1456]: linux-amd64/helm Feb 13 20:14:14.411586 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:14:14.424639 update_engine[1450]: I20250213 20:14:14.424474 1450 update_check_scheduler.cc:74] Next update check in 4m37s Feb 13 20:14:14.425446 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 20:14:14.427564 extend-filesystems[1440]: Resized partition /dev/vda9 Feb 13 20:14:14.438037 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:14:14.438114 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:14:14.440676 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:14:14.440879 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:14:14.453786 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:14:14.464130 systemd-logind[1449]: New seat seat0. Feb 13 20:14:14.466670 jq[1457]: true Feb 13 20:14:14.469029 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:14:14.469055 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:14:14.470671 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:14:14.496579 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:14:14.500141 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:14:14.561502 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:14:14.581380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378) Feb 13 20:14:14.592905 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:14:14.592905 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:14:14.592905 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:14:14.604781 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Feb 13 20:14:14.604781 extend-filesystems[1440]: Found vdb Feb 13 20:14:14.593685 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:14:14.614390 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:14:14.593920 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:14:14.608311 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:14:14.628379 systemd[1]: Starting sshkeys.service... Feb 13 20:14:14.663194 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:14:14.673873 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:14:14.747657 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:14:14.833467 coreos-metadata[1507]: Feb 13 20:14:14.833 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:14.846686 coreos-metadata[1507]: Feb 13 20:14:14.845 INFO Fetch successful Feb 13 20:14:14.874021 unknown[1507]: wrote ssh authorized keys file for user: core Feb 13 20:14:14.943288 update-ssh-keys[1515]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:14:14.944139 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:14:14.950913 systemd[1]: Finished sshkeys.service. 
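The online resize above grows /dev/vda9 from 553472 to 15121403 blocks while the filesystem stays mounted on /; per the "(4k)" in the resize2fs output, each block is 4096 bytes. A quick sanity check of what those block counts mean:

    # Sanity-check the resize reported by resize2fs above.
    BLOCK = 4096  # bytes; resize2fs reports "(4k) blocks"

    old_blocks, new_blocks = 553_472, 15_121_403
    old_bytes = old_blocks * BLOCK  # ~2.27 GB factory root partition
    new_bytes = new_blocks * BLOCK  # ~61.9 GB after growing to the disk

    for label, n in (("before", old_bytes), ("after", new_bytes)):
        print(f"{label}: {n} bytes = {n / 2**30:.1f} GiB")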
Feb 13 20:14:15.033266 containerd[1459]: time="2025-02-13T20:14:15.032978581Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:14:15.112595 containerd[1459]: time="2025-02-13T20:14:15.112533959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118399482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118453088Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118471601Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118636077Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118651846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118724580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118740518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118928744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118943268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118956345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120266 containerd[1459]: time="2025-02-13T20:14:15.118965948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120587 containerd[1459]: time="2025-02-13T20:14:15.119031758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120587 containerd[1459]: time="2025-02-13T20:14:15.119283395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120587 containerd[1459]: time="2025-02-13T20:14:15.119403478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:15.120587 containerd[1459]: time="2025-02-13T20:14:15.119416733Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:14:15.120587 containerd[1459]: time="2025-02-13T20:14:15.119506824Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:14:15.120587 containerd[1459]: time="2025-02-13T20:14:15.119552717Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125372693Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125434280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125450531Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125467501Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125482144Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125661299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.125918520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126034921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126048963Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126060983Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126075040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126107000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126125017Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126271 containerd[1459]: time="2025-02-13T20:14:15.126140706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126647 containerd[1459]: time="2025-02-13T20:14:15.126154893Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 20:14:15.126647 containerd[1459]: time="2025-02-13T20:14:15.126168195Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126647 containerd[1459]: time="2025-02-13T20:14:15.126181213Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126647 containerd[1459]: time="2025-02-13T20:14:15.126196624Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:14:15.126647 containerd[1459]: time="2025-02-13T20:14:15.126227234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128322462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128349544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128364662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128380331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128394940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128407737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128420488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128434831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128477383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128494599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128506748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128519947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128534716Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128571534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.128649 containerd[1459]: time="2025-02-13T20:14:15.128585085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 20:14:15.129005 containerd[1459]: time="2025-02-13T20:14:15.128595748Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:14:15.129168 containerd[1459]: time="2025-02-13T20:14:15.129056039Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:14:15.129228 containerd[1459]: time="2025-02-13T20:14:15.129089692Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:14:15.129289 containerd[1459]: time="2025-02-13T20:14:15.129276708Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:14:15.129363 containerd[1459]: time="2025-02-13T20:14:15.129347599Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:14:15.129409 containerd[1459]: time="2025-02-13T20:14:15.129400092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.129471 containerd[1459]: time="2025-02-13T20:14:15.129458370Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:14:15.129516 containerd[1459]: time="2025-02-13T20:14:15.129507865Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:14:15.129680 containerd[1459]: time="2025-02-13T20:14:15.129666071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:14:15.132305 containerd[1459]: time="2025-02-13T20:14:15.130057334Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:14:15.132305 containerd[1459]: time="2025-02-13T20:14:15.130121062Z" level=info msg="Connect containerd service" Feb 13 20:14:15.132305 containerd[1459]: time="2025-02-13T20:14:15.130160968Z" level=info msg="using legacy CRI server" Feb 13 20:14:15.132305 containerd[1459]: time="2025-02-13T20:14:15.130169265Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:14:15.132305 containerd[1459]: time="2025-02-13T20:14:15.130318419Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:14:15.135360 containerd[1459]: time="2025-02-13T20:14:15.135293889Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:14:15.135581 containerd[1459]: time="2025-02-13T20:14:15.135548893Z" level=info msg="Start subscribing containerd event" Feb 13 20:14:15.136118 containerd[1459]: time="2025-02-13T20:14:15.135648059Z" level=info msg="Start recovering state" Feb 13 20:14:15.136118 containerd[1459]: time="2025-02-13T20:14:15.135725464Z" level=info msg="Start event monitor" Feb 13 20:14:15.136118 containerd[1459]: time="2025-02-13T20:14:15.135742880Z" level=info msg="Start snapshots syncer" Feb 13 20:14:15.136118 containerd[1459]: time="2025-02-13T20:14:15.135752263Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:14:15.136118 containerd[1459]: time="2025-02-13T20:14:15.135759673Z" level=info msg="Start streaming server" Feb 13 20:14:15.139444 containerd[1459]: time="2025-02-13T20:14:15.136509319Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:14:15.139444 containerd[1459]: time="2025-02-13T20:14:15.136562758Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:14:15.136731 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:14:15.143592 containerd[1459]: time="2025-02-13T20:14:15.143505220Z" level=info msg="containerd successfully booted in 0.117380s" Feb 13 20:14:15.231428 systemd-networkd[1367]: eth1: Gained IPv6LL Feb 13 20:14:15.231872 systemd-networkd[1367]: eth0: Gained IPv6LL Feb 13 20:14:15.233399 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:15.234293 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:14:15.239581 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:14:15.253503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
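containerd's startup above logs one line per plugin, and every skipped plugin carries an error= field explaining why (the aufs module is absent, the btrfs/zfs snapshotters sit on the wrong filesystem, devmapper is unconfigured, no tracing endpoint is set). A small sketch that summarizes those skips from journal text; the regex targets the exact msg=/error= shape shown above:

    # Sketch: list which containerd plugins were skipped and why, given
    # journal text like the lines above (e.g. piped from `journalctl -u containerd`).
    import re
    import sys

    # Matches: msg="skip loading plugin \"<name>\"..." error="<reason>"
    SKIP = re.compile(
        r'skip loading plugin \\"([^"\\]+)\\"\.\.\..*?'
        r'error="((?:[^"\\]|\\.)*)"'
    )

    for line in sys.stdin:
        m = SKIP.search(line)
        if m:
            print(f"{m.group(1)}: {m.group(2)}")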
Feb 13 20:14:15.263353 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:14:15.316685 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:14:15.350560 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:14:15.383675 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:14:15.395614 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:14:15.415110 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:14:15.415329 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:14:15.424786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:14:15.465302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:14:15.477815 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:14:15.488551 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:14:15.489232 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:14:15.519819 tar[1456]: linux-amd64/README.md Feb 13 20:14:15.535893 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:14:16.355995 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:14:16.365324 systemd[1]: Started sshd@0-146.190.40.231:22-147.75.109.163:47586.service - OpenSSH per-connection server daemon (147.75.109.163:47586). Feb 13 20:14:16.433451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:16.435538 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:14:16.439891 systemd[1]: Startup finished in 1.080s (kernel) + 5.161s (initrd) + 5.592s (userspace) = 11.834s. Feb 13 20:14:16.445148 sshd[1556]: Accepted publickey for core from 147.75.109.163 port 47586 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:16.447852 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:16.450331 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:14:16.475334 systemd-logind[1449]: New session 1 of user core. Feb 13 20:14:16.475619 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:14:16.482782 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:14:16.506806 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:14:16.515698 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:14:16.529293 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:14:16.657704 systemd[1570]: Queued start job for default target default.target. Feb 13 20:14:16.673534 systemd[1570]: Created slice app.slice - User Application Slice. Feb 13 20:14:16.673569 systemd[1570]: Reached target paths.target - Paths. Feb 13 20:14:16.673585 systemd[1570]: Reached target timers.target - Timers. Feb 13 20:14:16.677414 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:14:16.690808 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:14:16.690884 systemd[1570]: Reached target sockets.target - Sockets. Feb 13 20:14:16.690899 systemd[1570]: Reached target basic.target - Basic System. 
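The boot summary above reads 1.080s (kernel) + 5.161s (initrd) + 5.592s (userspace) = 11.834s, while the displayed stages sum to 11.833s; the 1 ms gap is expected, since systemd rounds each stage for display but totals the raw microsecond values. A sketch that parses and checks such a line:

    # Parse a "Startup finished in ..." line like the one above and check the sum.
    import re

    line = ("Startup finished in 1.080s (kernel) + 5.161s (initrd) "
            "+ 5.592s (userspace) = 11.834s")  # copied from the log

    parts = [float(x) for x in re.findall(r"([\d.]+)s", line)]
    *stages, total = parts
    print(f"sum of stages = {sum(stages):.3f}s, reported total = {total:.3f}s")
    # A millisecond of disagreement is rounding, not an error.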
Feb 13 20:14:16.690949 systemd[1570]: Reached target default.target - Main User Target. Feb 13 20:14:16.690981 systemd[1570]: Startup finished in 153ms. Feb 13 20:14:16.691231 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:14:16.699542 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:14:16.773732 systemd[1]: Started sshd@1-146.190.40.231:22-147.75.109.163:47596.service - OpenSSH per-connection server daemon (147.75.109.163:47596). Feb 13 20:14:16.838882 sshd[1585]: Accepted publickey for core from 147.75.109.163 port 47596 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:16.839849 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:16.847063 systemd-logind[1449]: New session 2 of user core. Feb 13 20:14:16.852662 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:14:16.921200 sshd[1585]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:16.930954 systemd[1]: sshd@1-146.190.40.231:22-147.75.109.163:47596.service: Deactivated successfully. Feb 13 20:14:16.933380 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:14:16.935756 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:14:16.944714 systemd[1]: Started sshd@2-146.190.40.231:22-147.75.109.163:47604.service - OpenSSH per-connection server daemon (147.75.109.163:47604). Feb 13 20:14:16.946709 systemd-logind[1449]: Removed session 2. Feb 13 20:14:16.995101 sshd[1592]: Accepted publickey for core from 147.75.109.163 port 47604 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:16.997380 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:17.005362 systemd-logind[1449]: New session 3 of user core. Feb 13 20:14:17.010459 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:14:17.070747 sshd[1592]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:17.082103 systemd[1]: sshd@2-146.190.40.231:22-147.75.109.163:47604.service: Deactivated successfully. Feb 13 20:14:17.086005 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:14:17.088564 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:14:17.097837 systemd[1]: Started sshd@3-146.190.40.231:22-147.75.109.163:47614.service - OpenSSH per-connection server daemon (147.75.109.163:47614). Feb 13 20:14:17.100326 systemd-logind[1449]: Removed session 3. Feb 13 20:14:17.151750 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 47614 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:17.154082 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:17.162677 systemd-logind[1449]: New session 4 of user core. Feb 13 20:14:17.166474 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:14:17.177662 kubelet[1563]: E0213 20:14:17.177494 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:14:17.180857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:14:17.181036 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:14:17.181409 systemd[1]: kubelet.service: Consumed 1.197s CPU time. Feb 13 20:14:17.232919 sshd[1600]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:17.241257 systemd[1]: sshd@3-146.190.40.231:22-147.75.109.163:47614.service: Deactivated successfully. Feb 13 20:14:17.243305 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:14:17.245487 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:14:17.255341 systemd[1]: Started sshd@4-146.190.40.231:22-147.75.109.163:47628.service - OpenSSH per-connection server daemon (147.75.109.163:47628). Feb 13 20:14:17.256924 systemd-logind[1449]: Removed session 4. Feb 13 20:14:17.294764 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 47628 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:17.296744 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:17.303397 systemd-logind[1449]: New session 5 of user core. Feb 13 20:14:17.310516 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:14:17.379813 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:14:17.380214 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:14:17.394016 sudo[1611]: pam_unix(sudo:session): session closed for user root Feb 13 20:14:17.397950 sshd[1608]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:17.413097 systemd[1]: sshd@4-146.190.40.231:22-147.75.109.163:47628.service: Deactivated successfully. Feb 13 20:14:17.415332 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:14:17.417547 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:14:17.423631 systemd[1]: Started sshd@5-146.190.40.231:22-147.75.109.163:47632.service - OpenSSH per-connection server daemon (147.75.109.163:47632). Feb 13 20:14:17.425133 systemd-logind[1449]: Removed session 5. Feb 13 20:14:17.469562 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 47632 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:17.471924 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:17.477193 systemd-logind[1449]: New session 6 of user core. Feb 13 20:14:17.485630 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:14:17.546408 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:14:17.546713 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:14:17.551665 sudo[1620]: pam_unix(sudo:session): session closed for user root Feb 13 20:14:17.558766 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:14:17.559157 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:14:17.574597 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:14:17.589356 auditctl[1623]: No rules Feb 13 20:14:17.590076 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:14:17.590423 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:14:17.600827 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Feb 13 20:14:17.632801 augenrules[1641]: No rules Feb 13 20:14:17.634476 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:14:17.636088 sudo[1619]: pam_unix(sudo:session): session closed for user root Feb 13 20:14:17.640122 sshd[1616]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:17.654446 systemd[1]: sshd@5-146.190.40.231:22-147.75.109.163:47632.service: Deactivated successfully. Feb 13 20:14:17.657890 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:14:17.661534 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:14:17.667713 systemd[1]: Started sshd@6-146.190.40.231:22-147.75.109.163:47638.service - OpenSSH per-connection server daemon (147.75.109.163:47638). Feb 13 20:14:17.669809 systemd-logind[1449]: Removed session 6. Feb 13 20:14:17.723283 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 47638 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:17.725369 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:17.730926 systemd-logind[1449]: New session 7 of user core. Feb 13 20:14:17.742565 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:14:17.802453 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:14:17.802784 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:14:18.292755 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:14:18.295889 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:14:18.686628 dockerd[1668]: time="2025-02-13T20:14:18.686560387Z" level=info msg="Starting up" Feb 13 20:14:18.795188 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3640684048-merged.mount: Deactivated successfully. Feb 13 20:14:18.868013 dockerd[1668]: time="2025-02-13T20:14:18.867967233Z" level=info msg="Loading containers: start." Feb 13 20:14:18.991262 kernel: Initializing XFRM netlink socket Feb 13 20:14:19.022610 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:19.023871 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:19.088646 systemd-networkd[1367]: docker0: Link UP Feb 13 20:14:19.089410 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Feb 13 20:14:19.109510 dockerd[1668]: time="2025-02-13T20:14:19.109348173Z" level=info msg="Loading containers: done." Feb 13 20:14:19.130628 dockerd[1668]: time="2025-02-13T20:14:19.130551172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:14:19.130801 dockerd[1668]: time="2025-02-13T20:14:19.130704841Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:14:19.130856 dockerd[1668]: time="2025-02-13T20:14:19.130824649Z" level=info msg="Daemon has completed initialization" Feb 13 20:14:19.164817 dockerd[1668]: time="2025-02-13T20:14:19.164335523Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:14:19.164674 systemd[1]: Started docker.service - Docker Application Container Engine. 
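dockerd logs "Starting up" at 20:14:18.686560387Z and "Daemon has completed initialization" at 20:14:19.130824649Z, so daemon init took roughly 0.44 s, with the XFRM netlink socket and docker0 bridge setup in between. A sketch computing that gap from the two logged timestamps, trimming the nanosecond field to the microseconds datetime.fromisoformat accepts:

    # Compute dockerd init time from the two timestamps logged above.
    from datetime import datetime

    def parse(ts: str) -> datetime:
        # "2025-02-13T20:14:18.686560387Z" -> truncate nanoseconds to microseconds
        head, frac = ts.rstrip("Z").split(".")
        return datetime.fromisoformat(f"{head}.{frac[:6]}")

    start = parse("2025-02-13T20:14:18.686560387Z")  # "Starting up"
    done = parse("2025-02-13T20:14:19.130824649Z")   # "Daemon has completed initialization"
    print(f"dockerd init took {(done - start).total_seconds():.3f}s")  # ~0.444s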
Feb 13 20:14:19.792842 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1845959054-merged.mount: Deactivated successfully. Feb 13 20:14:19.845117 containerd[1459]: time="2025-02-13T20:14:19.845042787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:14:20.397025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2879481177.mount: Deactivated successfully. Feb 13 20:14:21.592228 containerd[1459]: time="2025-02-13T20:14:21.592160625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:21.593362 containerd[1459]: time="2025-02-13T20:14:21.593258581Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 20:14:21.594112 containerd[1459]: time="2025-02-13T20:14:21.594077657Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:21.596752 containerd[1459]: time="2025-02-13T20:14:21.596700103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:21.598024 containerd[1459]: time="2025-02-13T20:14:21.597840357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 1.752753087s" Feb 13 20:14:21.598024 containerd[1459]: time="2025-02-13T20:14:21.597878966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 20:14:21.598902 containerd[1459]: time="2025-02-13T20:14:21.598713635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:14:22.950923 containerd[1459]: time="2025-02-13T20:14:22.950859884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:22.951963 containerd[1459]: time="2025-02-13T20:14:22.951914154Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 20:14:22.952698 containerd[1459]: time="2025-02-13T20:14:22.952476393Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:22.955891 containerd[1459]: time="2025-02-13T20:14:22.955835807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:22.957851 containerd[1459]: time="2025-02-13T20:14:22.957327485Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.358579672s" Feb 13 20:14:22.957851 containerd[1459]: time="2025-02-13T20:14:22.957375464Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 20:14:22.958577 containerd[1459]: time="2025-02-13T20:14:22.958550271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:14:24.108028 containerd[1459]: time="2025-02-13T20:14:24.106769677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:24.108028 containerd[1459]: time="2025-02-13T20:14:24.107698913Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 20:14:24.108028 containerd[1459]: time="2025-02-13T20:14:24.107971574Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:24.110973 containerd[1459]: time="2025-02-13T20:14:24.110930957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:24.112065 containerd[1459]: time="2025-02-13T20:14:24.112027488Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.153444568s" Feb 13 20:14:24.112210 containerd[1459]: time="2025-02-13T20:14:24.112193178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 20:14:24.112800 containerd[1459]: time="2025-02-13T20:14:24.112681850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:14:25.257653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3550958033.mount: Deactivated successfully. 
Feb 13 20:14:25.716521 containerd[1459]: time="2025-02-13T20:14:25.716449420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:25.717383 containerd[1459]: time="2025-02-13T20:14:25.717334695Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 20:14:25.718212 containerd[1459]: time="2025-02-13T20:14:25.718053519Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:25.720513 containerd[1459]: time="2025-02-13T20:14:25.720441013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:25.721203 containerd[1459]: time="2025-02-13T20:14:25.720999105Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.60816373s" Feb 13 20:14:25.721203 containerd[1459]: time="2025-02-13T20:14:25.721036420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 20:14:25.722054 containerd[1459]: time="2025-02-13T20:14:25.721753298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:14:26.103701 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 20:14:26.251784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288307770.mount: Deactivated successfully. 
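systemd-resolved downgrading to "UDP instead of UDP+EDNS0" for 67.207.67.2 means the upstream resolver (or something in the path) mishandled an EDNS0 query, so resolved retried with plain UDP and remembered the degraded feature level. A sketch of an equivalent probe; it assumes the third-party dnspython package, and the resolver address comes from the log:

    # Probe whether a resolver answers EDNS0 queries, mirroring the feature
    # detection that made systemd-resolved fall back to plain UDP above.
    # Requires dnspython (pip install dnspython), an assumption here.
    import dns.message
    import dns.query

    SERVER = "67.207.67.2"  # DNS server named in the log

    plain = dns.message.make_query("example.com", "A")             # no EDNS0
    edns = dns.message.make_query("example.com", "A", use_edns=0)  # EDNS0 version 0

    for label, query in (("plain UDP", plain), ("UDP+EDNS0", edns)):
        try:
            resp = dns.query.udp(query, SERVER, timeout=3)
            print(f"{label}: rcode={resp.rcode()}")
        except Exception as exc:  # timeout, malformed response, etc.
            print(f"{label}: failed ({exc})")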
Feb 13 20:14:27.123145 containerd[1459]: time="2025-02-13T20:14:27.123064740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:27.124292 containerd[1459]: time="2025-02-13T20:14:27.124225210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 20:14:27.125187 containerd[1459]: time="2025-02-13T20:14:27.124824508Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:27.129529 containerd[1459]: time="2025-02-13T20:14:27.129342019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:27.130815 containerd[1459]: time="2025-02-13T20:14:27.130756622Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.408969441s" Feb 13 20:14:27.130815 containerd[1459]: time="2025-02-13T20:14:27.130814128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 20:14:27.131687 containerd[1459]: time="2025-02-13T20:14:27.131400790Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:14:27.421780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:14:27.427832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:14:27.604489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:27.611738 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:14:27.624687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225538093.mount: Deactivated successfully. 
Feb 13 20:14:27.631269 containerd[1459]: time="2025-02-13T20:14:27.630398189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:27.631668 containerd[1459]: time="2025-02-13T20:14:27.631624794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 20:14:27.632392 containerd[1459]: time="2025-02-13T20:14:27.632362846Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:27.636795 containerd[1459]: time="2025-02-13T20:14:27.636748773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:27.638414 containerd[1459]: time="2025-02-13T20:14:27.637735783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.301009ms" Feb 13 20:14:27.638414 containerd[1459]: time="2025-02-13T20:14:27.637773091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:14:27.638603 containerd[1459]: time="2025-02-13T20:14:27.638512741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:14:27.677970 kubelet[1942]: E0213 20:14:27.677819 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:14:27.683275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:14:27.683455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:14:28.190884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274911744.mount: Deactivated successfully. Feb 13 20:14:29.182455 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
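Both kubelet failures so far (PIDs 1563 and 1942) are the same pre-join condition: /var/lib/kubelet/config.yaml does not exist yet, because kubeadm only writes it during init/join; the deprecation warnings later (20:14:34.066) likewise say the runtime flags belong in that file. For orientation, a sketch of the file's general shape; kind and apiVersion are the real KubeletConfiguration values, the field values are illustrative only:

    # Sketch: the file kubelet cannot find above is a KubeletConfiguration.
    # kubeadm writes the real one during `kubeadm init`/`kubeadm join`;
    # this is illustrative, not something to install on a kubeadm node.
    MINIMAL_KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd  # matches SystemdCgroup:true in the CRI config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    """

    print(MINIMAL_KUBELET_CONFIG)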
Feb 13 20:14:30.008412 containerd[1459]: time="2025-02-13T20:14:30.008323038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:30.010007 containerd[1459]: time="2025-02-13T20:14:30.009406782Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 20:14:30.010987 containerd[1459]: time="2025-02-13T20:14:30.010901129Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:30.014541 containerd[1459]: time="2025-02-13T20:14:30.014474179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:30.016759 containerd[1459]: time="2025-02-13T20:14:30.016532555Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.377990991s" Feb 13 20:14:30.016759 containerd[1459]: time="2025-02-13T20:14:30.016577978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 20:14:33.428039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:33.434584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:14:33.472390 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit session-7.scope)... Feb 13 20:14:33.472554 systemd[1]: Reloading... Feb 13 20:14:33.593269 zram_generator::config[2074]: No configuration found. Feb 13 20:14:33.724013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:14:33.802273 systemd[1]: Reloading finished in 329 ms. Feb 13 20:14:33.852560 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:14:33.852812 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:14:33.853039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:33.866692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:14:33.995317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:34.006681 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:14:34.066667 kubelet[2128]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:14:34.066667 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Feb 13 20:14:34.066667 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:14:34.067163 kubelet[2128]: I0213 20:14:34.066724 2128 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:14:34.339990 kubelet[2128]: I0213 20:14:34.338701 2128 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:14:34.339990 kubelet[2128]: I0213 20:14:34.338739 2128 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:14:34.339990 kubelet[2128]: I0213 20:14:34.339196 2128 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:14:34.364536 kubelet[2128]: I0213 20:14:34.364446 2128 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:14:34.366278 kubelet[2128]: E0213 20:14:34.366158 2128 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.40.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:34.372687 kubelet[2128]: E0213 20:14:34.372641 2128 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:14:34.372687 kubelet[2128]: I0213 20:14:34.372679 2128 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:14:34.378915 kubelet[2128]: I0213 20:14:34.378865 2128 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:14:34.380622 kubelet[2128]: I0213 20:14:34.380546 2128 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:14:34.380832 kubelet[2128]: I0213 20:14:34.380617 2128 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-9-4d1da4e47c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:14:34.380982 kubelet[2128]: I0213 20:14:34.380839 2128 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:14:34.380982 kubelet[2128]: I0213 20:14:34.380851 2128 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:14:34.382275 kubelet[2128]: I0213 20:14:34.382197 2128 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:14:34.386094 kubelet[2128]: I0213 20:14:34.386054 2128 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:14:34.386094 kubelet[2128]: I0213 20:14:34.386093 2128 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:14:34.386340 kubelet[2128]: I0213 20:14:34.386118 2128 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:14:34.386340 kubelet[2128]: I0213 20:14:34.386131 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:14:34.392374 kubelet[2128]: W0213 20:14:34.391821 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.40.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:34.392374 kubelet[2128]: E0213 20:14:34.391898 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.40.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:34.392374 kubelet[2128]: 
W0213 20:14:34.392172 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.40.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-9-4d1da4e47c&limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:34.392374 kubelet[2128]: E0213 20:14:34.392215 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.40.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-9-4d1da4e47c&limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:34.392780 kubelet[2128]: I0213 20:14:34.392664 2128 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:14:34.396178 kubelet[2128]: I0213 20:14:34.395674 2128 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:14:34.396178 kubelet[2128]: W0213 20:14:34.395764 2128 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:14:34.397198 kubelet[2128]: I0213 20:14:34.396870 2128 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:14:34.397198 kubelet[2128]: I0213 20:14:34.396903 2128 server.go:1287] "Started kubelet" Feb 13 20:14:34.402363 kubelet[2128]: E0213 20:14:34.400905 2128 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.40.231:6443/api/v1/namespaces/default/events\": dial tcp 146.190.40.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-9-4d1da4e47c.1823ddba8d261b05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-9-4d1da4e47c,UID:ci-4081.3.1-9-4d1da4e47c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-9-4d1da4e47c,},FirstTimestamp:2025-02-13 20:14:34.396883717 +0000 UTC m=+0.384797291,LastTimestamp:2025-02-13 20:14:34.396883717 +0000 UTC m=+0.384797291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-9-4d1da4e47c,}" Feb 13 20:14:34.405710 kubelet[2128]: I0213 20:14:34.405684 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:14:34.406158 kubelet[2128]: I0213 20:14:34.406058 2128 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:14:34.406403 kubelet[2128]: I0213 20:14:34.406385 2128 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:14:34.411186 kubelet[2128]: I0213 20:14:34.411129 2128 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:14:34.411503 kubelet[2128]: I0213 20:14:34.411483 2128 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:14:34.413648 kubelet[2128]: I0213 20:14:34.413612 2128 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:14:34.414377 kubelet[2128]: I0213 20:14:34.414349 2128 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:14:34.414739 kubelet[2128]: E0213 
20:14:34.414708 2128 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" Feb 13 20:14:34.416582 kubelet[2128]: I0213 20:14:34.416555 2128 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:14:34.416657 kubelet[2128]: I0213 20:14:34.416627 2128 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:14:34.418200 kubelet[2128]: E0213 20:14:34.417213 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.40.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-9-4d1da4e47c?timeout=10s\": dial tcp 146.190.40.231:6443: connect: connection refused" interval="200ms" Feb 13 20:14:34.418200 kubelet[2128]: W0213 20:14:34.417627 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.40.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:34.418200 kubelet[2128]: E0213 20:14:34.417680 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.40.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:34.418831 kubelet[2128]: I0213 20:14:34.418808 2128 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:14:34.418901 kubelet[2128]: I0213 20:14:34.418887 2128 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:14:34.421297 kubelet[2128]: I0213 20:14:34.421274 2128 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:14:34.432300 kubelet[2128]: I0213 20:14:34.432119 2128 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:14:34.433748 kubelet[2128]: I0213 20:14:34.433644 2128 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:14:34.435261 kubelet[2128]: I0213 20:14:34.435209 2128 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:14:34.435261 kubelet[2128]: I0213 20:14:34.435261 2128 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 20:14:34.435366 kubelet[2128]: I0213 20:14:34.435271 2128 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:14:34.435366 kubelet[2128]: E0213 20:14:34.435333 2128 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:14:34.444993 kubelet[2128]: E0213 20:14:34.444962 2128 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:14:34.445255 kubelet[2128]: W0213 20:14:34.445153 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.40.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:34.445362 kubelet[2128]: E0213 20:14:34.445326 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.40.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:34.452122 kubelet[2128]: I0213 20:14:34.451988 2128 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:14:34.452122 kubelet[2128]: I0213 20:14:34.452079 2128 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:14:34.452122 kubelet[2128]: I0213 20:14:34.452101 2128 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:14:34.453839 kubelet[2128]: I0213 20:14:34.453804 2128 policy_none.go:49] "None policy: Start" Feb 13 20:14:34.453839 kubelet[2128]: I0213 20:14:34.453829 2128 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:14:34.453839 kubelet[2128]: I0213 20:14:34.453840 2128 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:14:34.459357 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:14:34.468916 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:14:34.472798 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:14:34.482019 kubelet[2128]: I0213 20:14:34.481471 2128 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:14:34.482019 kubelet[2128]: I0213 20:14:34.481707 2128 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:14:34.482019 kubelet[2128]: I0213 20:14:34.481721 2128 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:14:34.482019 kubelet[2128]: I0213 20:14:34.481974 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:14:34.484684 kubelet[2128]: E0213 20:14:34.484329 2128 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:14:34.484684 kubelet[2128]: E0213 20:14:34.484394 2128 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-9-4d1da4e47c\" not found" Feb 13 20:14:34.546021 systemd[1]: Created slice kubepods-burstable-pod607ecc46cc3a7de981bf51d2389071f6.slice - libcontainer container kubepods-burstable-pod607ecc46cc3a7de981bf51d2389071f6.slice. 
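
The HardEvictionThresholds list in the nodeConfig dump above encodes the kubelet's defaults: evict when memory.available drops below 100Mi, nodefs.available below 10%, imagefs.available below 15%, or either inodesFree figure below 5%. A minimal Go sketch of how one such threshold evaluates, using simplified stand-in types rather than the kubelet's own:

package main

import "fmt"

// Threshold is a simplified stand-in for an entry in the
// HardEvictionThresholds list above (not the kubelet's real type):
// a signal plus either an absolute quantity or a percentage of capacity.
type Threshold struct {
    Signal     string
    Percentage float64 // fraction of capacity, e.g. 0.10 for nodefs.available
    QuantityMi int64   // absolute mebibytes; 0 means "use Percentage"
}

// exceeded reports whether available capacity has fallen below the limit.
func exceeded(t Threshold, availableMi, capacityMi int64) bool {
    limit := t.QuantityMi
    if limit == 0 {
        limit = int64(t.Percentage * float64(capacityMi))
    }
    return availableMi < limit
}

func main() {
    mem := Threshold{Signal: "memory.available", QuantityMi: 100}
    fmt.Println(exceeded(mem, 80, 4096)) // true: under the 100Mi floor
    fs := Threshold{Signal: "nodefs.available", Percentage: 0.10}
    fmt.Println(exceeded(fs, 9000, 80000)) // false: 9000Mi above the 8000Mi limit
}

Nothing is evicted until the eviction manager's control loop starts, which the "Eviction manager: starting control loop" line just above records.
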
Feb 13 20:14:34.563345 kubelet[2128]: E0213 20:14:34.563078 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.566422 systemd[1]: Created slice kubepods-burstable-podee17e95dc072d75e2c40a8b9c5bc9aed.slice - libcontainer container kubepods-burstable-podee17e95dc072d75e2c40a8b9c5bc9aed.slice. Feb 13 20:14:34.575156 kubelet[2128]: E0213 20:14:34.574998 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.578157 systemd[1]: Created slice kubepods-burstable-pod9c4cca2d8b8f2aef65c74724ef18d1fd.slice - libcontainer container kubepods-burstable-pod9c4cca2d8b8f2aef65c74724ef18d1fd.slice. Feb 13 20:14:34.580032 kubelet[2128]: E0213 20:14:34.580009 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.584776 kubelet[2128]: I0213 20:14:34.584727 2128 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.585212 kubelet[2128]: E0213 20:14:34.585177 2128 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://146.190.40.231:6443/api/v1/nodes\": dial tcp 146.190.40.231:6443: connect: connection refused" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.618269 kubelet[2128]: E0213 20:14:34.618076 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.40.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-9-4d1da4e47c?timeout=10s\": dial tcp 146.190.40.231:6443: connect: connection refused" interval="400ms" Feb 13 20:14:34.717932 kubelet[2128]: I0213 20:14:34.717812 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/607ecc46cc3a7de981bf51d2389071f6-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" (UID: \"607ecc46cc3a7de981bf51d2389071f6\") " pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.717932 kubelet[2128]: I0213 20:14:34.717868 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.717932 kubelet[2128]: I0213 20:14:34.717886 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.717932 kubelet[2128]: I0213 20:14:34.717902 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.717932 kubelet[2128]: I0213 20:14:34.717918 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c4cca2d8b8f2aef65c74724ef18d1fd-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-9-4d1da4e47c\" (UID: \"9c4cca2d8b8f2aef65c74724ef18d1fd\") " pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.718270 kubelet[2128]: I0213 20:14:34.717934 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/607ecc46cc3a7de981bf51d2389071f6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" (UID: \"607ecc46cc3a7de981bf51d2389071f6\") " pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.718270 kubelet[2128]: I0213 20:14:34.717950 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/607ecc46cc3a7de981bf51d2389071f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" (UID: \"607ecc46cc3a7de981bf51d2389071f6\") " pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.718270 kubelet[2128]: I0213 20:14:34.717968 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.718270 kubelet[2128]: I0213 20:14:34.718008 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.787293 kubelet[2128]: I0213 20:14:34.786792 2128 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.787293 kubelet[2128]: E0213 20:14:34.787141 2128 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://146.190.40.231:6443/api/v1/nodes\": dial tcp 146.190.40.231:6443: connect: connection refused" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:34.864300 kubelet[2128]: E0213 20:14:34.864209 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:34.865263 containerd[1459]: time="2025-02-13T20:14:34.865162610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-9-4d1da4e47c,Uid:607ecc46cc3a7de981bf51d2389071f6,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:34.866760 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Feb 13 20:14:34.876784 kubelet[2128]: E0213 20:14:34.876324 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:34.877006 containerd[1459]: time="2025-02-13T20:14:34.876952004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-9-4d1da4e47c,Uid:ee17e95dc072d75e2c40a8b9c5bc9aed,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:34.882575 kubelet[2128]: E0213 20:14:34.881730 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:34.882710 containerd[1459]: time="2025-02-13T20:14:34.882199015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-9-4d1da4e47c,Uid:9c4cca2d8b8f2aef65c74724ef18d1fd,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:35.019276 kubelet[2128]: E0213 20:14:35.019138 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.40.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-9-4d1da4e47c?timeout=10s\": dial tcp 146.190.40.231:6443: connect: connection refused" interval="800ms" Feb 13 20:14:35.189388 kubelet[2128]: I0213 20:14:35.189005 2128 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:35.190051 kubelet[2128]: E0213 20:14:35.190010 2128 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://146.190.40.231:6443/api/v1/nodes\": dial tcp 146.190.40.231:6443: connect: connection refused" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:35.320728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426805598.mount: Deactivated successfully. 
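
The three RunPodSandbox requests above come from the static pod manifests under /etc/kubernetes/manifests (the "Adding static pod path" line earlier): apiserver, controller-manager, and scheduler, each keyed by the UID that also names its kubepods-burstable-pod<uid>.slice cgroup. A small Go illustration of the metadata those lines print, using a local stand-in for the CRI type rather than the imported cri-api package:

package main

import "fmt"

// PodSandboxMetadata mirrors the CRI message printed in the log's
// RunPodSandbox lines; a local stand-in, not the real cri-api type.
type PodSandboxMetadata struct {
    Name      string
    Uid       string
    Namespace string
    Attempt   uint32
}

func main() {
    // The three control-plane static pods, with the UIDs from the log.
    for _, m := range []PodSandboxMetadata{
        {"kube-apiserver-ci-4081.3.1-9-4d1da4e47c", "607ecc46cc3a7de981bf51d2389071f6", "kube-system", 0},
        {"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c", "ee17e95dc072d75e2c40a8b9c5bc9aed", "kube-system", 0},
        {"kube-scheduler-ci-4081.3.1-9-4d1da4e47c", "9c4cca2d8b8f2aef65c74724ef18d1fd", "kube-system", 0},
    } {
        fmt.Printf("RunPodSandbox for %+v\n", m)
    }
}

Attempt:0 marks the first sandbox for each pod; recreating a failed sandbox would be expected to bump that counter.
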
Feb 13 20:14:35.326696 containerd[1459]: time="2025-02-13T20:14:35.326627785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:14:35.327557 containerd[1459]: time="2025-02-13T20:14:35.327527485Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:14:35.328621 containerd[1459]: time="2025-02-13T20:14:35.328582748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:14:35.328812 containerd[1459]: time="2025-02-13T20:14:35.328711333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:14:35.329900 containerd[1459]: time="2025-02-13T20:14:35.329789745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:14:35.329900 containerd[1459]: time="2025-02-13T20:14:35.329864985Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:14:35.332467 containerd[1459]: time="2025-02-13T20:14:35.332407962Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:14:35.333427 containerd[1459]: time="2025-02-13T20:14:35.333391990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 451.084799ms" Feb 13 20:14:35.336265 containerd[1459]: time="2025-02-13T20:14:35.335332808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.045949ms" Feb 13 20:14:35.336265 containerd[1459]: time="2025-02-13T20:14:35.335958667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:14:35.336968 containerd[1459]: time="2025-02-13T20:14:35.336941841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.861907ms" Feb 13 20:14:35.496823 kubelet[2128]: W0213 20:14:35.496653 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.40.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:35.496823 
kubelet[2128]: E0213 20:14:35.496703 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.40.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:35.518227 containerd[1459]: time="2025-02-13T20:14:35.517810830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:35.518227 containerd[1459]: time="2025-02-13T20:14:35.517883801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:35.518227 containerd[1459]: time="2025-02-13T20:14:35.517904454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:35.518227 containerd[1459]: time="2025-02-13T20:14:35.518000094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:35.535557 containerd[1459]: time="2025-02-13T20:14:35.535389324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:35.535683 containerd[1459]: time="2025-02-13T20:14:35.535516331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:35.535683 containerd[1459]: time="2025-02-13T20:14:35.535534714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:35.535683 containerd[1459]: time="2025-02-13T20:14:35.535616659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:35.550176 containerd[1459]: time="2025-02-13T20:14:35.548404395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:35.550176 containerd[1459]: time="2025-02-13T20:14:35.548473541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:35.550176 containerd[1459]: time="2025-02-13T20:14:35.548484928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:35.550176 containerd[1459]: time="2025-02-13T20:14:35.548574378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:35.552882 systemd[1]: Started cri-containerd-cb81a5984f31141cb696ed498fa8c0eb368898fe311600f2d415d1db14d2d6ae.scope - libcontainer container cb81a5984f31141cb696ed498fa8c0eb368898fe311600f2d415d1db14d2d6ae. Feb 13 20:14:35.585508 systemd[1]: Started cri-containerd-d0f850de5f554b1da9c780086b0dd8d505e8cb63a401181b414944d3478382a3.scope - libcontainer container d0f850de5f554b1da9c780086b0dd8d505e8cb63a401181b414944d3478382a3. 
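
The Pulled-image lines above resolve pause:3.8 once per sandbox, identifying it both by repo tag and by repo digest, with each pull of the ~311KiB image finishing in under half a second. A quick Go sketch (not containerd's reference parser) of splitting such a digest reference into its name and digest parts:

package main

import (
    "fmt"
    "strings"
)

// splitDigestRef splits a "name@sha256:..." reference like the repo
// digest in the Pulled-image lines above. A convenience sketch only.
func splitDigestRef(ref string) (name, digest string) {
    if i := strings.LastIndex(ref, "@"); i >= 0 {
        return ref[:i], ref[i+1:]
    }
    return ref, ""
}

func main() {
    name, digest := splitDigestRef("registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d")
    fmt.Println(name)   // registry.k8s.io/pause
    fmt.Println(digest) // sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
}
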
Feb 13 20:14:35.586434 kubelet[2128]: W0213 20:14:35.586325 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.40.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-9-4d1da4e47c&limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:35.586434 kubelet[2128]: E0213 20:14:35.586394 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.40.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-9-4d1da4e47c&limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:35.590670 systemd[1]: Started cri-containerd-c8e620674977c5bcc86f35f46b038325e1d401cc9bf098a9f6878be153c8bb79.scope - libcontainer container c8e620674977c5bcc86f35f46b038325e1d401cc9bf098a9f6878be153c8bb79. Feb 13 20:14:35.644465 containerd[1459]: time="2025-02-13T20:14:35.643815453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-9-4d1da4e47c,Uid:ee17e95dc072d75e2c40a8b9c5bc9aed,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb81a5984f31141cb696ed498fa8c0eb368898fe311600f2d415d1db14d2d6ae\"" Feb 13 20:14:35.650920 kubelet[2128]: E0213 20:14:35.650658 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:35.656035 containerd[1459]: time="2025-02-13T20:14:35.655986528Z" level=info msg="CreateContainer within sandbox \"cb81a5984f31141cb696ed498fa8c0eb368898fe311600f2d415d1db14d2d6ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:14:35.657794 kubelet[2128]: W0213 20:14:35.657584 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.40.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:35.657794 kubelet[2128]: E0213 20:14:35.657762 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.40.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:35.670139 containerd[1459]: time="2025-02-13T20:14:35.669529740Z" level=info msg="CreateContainer within sandbox \"cb81a5984f31141cb696ed498fa8c0eb368898fe311600f2d415d1db14d2d6ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c41a950c7cce2ccc91e4733482618d63efb1e6da27f7d13313b37df04504b9d\"" Feb 13 20:14:35.670709 containerd[1459]: time="2025-02-13T20:14:35.670680365Z" level=info msg="StartContainer for \"6c41a950c7cce2ccc91e4733482618d63efb1e6da27f7d13313b37df04504b9d\"" Feb 13 20:14:35.679442 containerd[1459]: time="2025-02-13T20:14:35.679376931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-9-4d1da4e47c,Uid:9c4cca2d8b8f2aef65c74724ef18d1fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8e620674977c5bcc86f35f46b038325e1d401cc9bf098a9f6878be153c8bb79\"" Feb 13 20:14:35.681782 containerd[1459]: time="2025-02-13T20:14:35.681698330Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-9-4d1da4e47c,Uid:607ecc46cc3a7de981bf51d2389071f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0f850de5f554b1da9c780086b0dd8d505e8cb63a401181b414944d3478382a3\"" Feb 13 20:14:35.682549 kubelet[2128]: E0213 20:14:35.682283 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:35.682549 kubelet[2128]: E0213 20:14:35.682364 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:35.684501 containerd[1459]: time="2025-02-13T20:14:35.684470840Z" level=info msg="CreateContainer within sandbox \"d0f850de5f554b1da9c780086b0dd8d505e8cb63a401181b414944d3478382a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:14:35.686970 containerd[1459]: time="2025-02-13T20:14:35.686942587Z" level=info msg="CreateContainer within sandbox \"c8e620674977c5bcc86f35f46b038325e1d401cc9bf098a9f6878be153c8bb79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:14:35.710820 containerd[1459]: time="2025-02-13T20:14:35.710780516Z" level=info msg="CreateContainer within sandbox \"d0f850de5f554b1da9c780086b0dd8d505e8cb63a401181b414944d3478382a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6dbdd2aadee567caa7b8b28c078277a648eeb99e214196a02fc766b50480b2fb\"" Feb 13 20:14:35.711179 containerd[1459]: time="2025-02-13T20:14:35.711158680Z" level=info msg="StartContainer for \"6dbdd2aadee567caa7b8b28c078277a648eeb99e214196a02fc766b50480b2fb\"" Feb 13 20:14:35.714442 systemd[1]: Started cri-containerd-6c41a950c7cce2ccc91e4733482618d63efb1e6da27f7d13313b37df04504b9d.scope - libcontainer container 6c41a950c7cce2ccc91e4733482618d63efb1e6da27f7d13313b37df04504b9d. Feb 13 20:14:35.715883 containerd[1459]: time="2025-02-13T20:14:35.715845078Z" level=info msg="CreateContainer within sandbox \"c8e620674977c5bcc86f35f46b038325e1d401cc9bf098a9f6878be153c8bb79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"feb878aa3589dba0b88262a971ebe354bfb4c11bda4b49fce2eb918f87eb97a5\"" Feb 13 20:14:35.716313 containerd[1459]: time="2025-02-13T20:14:35.716283295Z" level=info msg="StartContainer for \"feb878aa3589dba0b88262a971ebe354bfb4c11bda4b49fce2eb918f87eb97a5\"" Feb 13 20:14:35.765647 systemd[1]: Started cri-containerd-6dbdd2aadee567caa7b8b28c078277a648eeb99e214196a02fc766b50480b2fb.scope - libcontainer container 6dbdd2aadee567caa7b8b28c078277a648eeb99e214196a02fc766b50480b2fb. Feb 13 20:14:35.775495 systemd[1]: Started cri-containerd-feb878aa3589dba0b88262a971ebe354bfb4c11bda4b49fce2eb918f87eb97a5.scope - libcontainer container feb878aa3589dba0b88262a971ebe354bfb4c11bda4b49fce2eb918f87eb97a5. 
Feb 13 20:14:35.790527 containerd[1459]: time="2025-02-13T20:14:35.790484580Z" level=info msg="StartContainer for \"6c41a950c7cce2ccc91e4733482618d63efb1e6da27f7d13313b37df04504b9d\" returns successfully" Feb 13 20:14:35.820773 kubelet[2128]: E0213 20:14:35.820327 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.40.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-9-4d1da4e47c?timeout=10s\": dial tcp 146.190.40.231:6443: connect: connection refused" interval="1.6s" Feb 13 20:14:35.834880 kubelet[2128]: W0213 20:14:35.834757 2128 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.40.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.40.231:6443: connect: connection refused Feb 13 20:14:35.834880 kubelet[2128]: E0213 20:14:35.834835 2128 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.40.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.40.231:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:14:35.862371 containerd[1459]: time="2025-02-13T20:14:35.862326445Z" level=info msg="StartContainer for \"6dbdd2aadee567caa7b8b28c078277a648eeb99e214196a02fc766b50480b2fb\" returns successfully" Feb 13 20:14:35.862959 containerd[1459]: time="2025-02-13T20:14:35.862898595Z" level=info msg="StartContainer for \"feb878aa3589dba0b88262a971ebe354bfb4c11bda4b49fce2eb918f87eb97a5\" returns successfully" Feb 13 20:14:35.991503 kubelet[2128]: I0213 20:14:35.991473 2128 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:35.991902 kubelet[2128]: E0213 20:14:35.991874 2128 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://146.190.40.231:6443/api/v1/nodes\": dial tcp 146.190.40.231:6443: connect: connection refused" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:36.458089 kubelet[2128]: E0213 20:14:36.458039 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:36.459686 kubelet[2128]: E0213 20:14:36.458223 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:36.462977 kubelet[2128]: E0213 20:14:36.462948 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:36.463250 kubelet[2128]: E0213 20:14:36.463145 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:36.464380 kubelet[2128]: E0213 20:14:36.464344 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:36.464487 kubelet[2128]: E0213 20:14:36.464453 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:37.468041 kubelet[2128]: E0213 20:14:37.467998 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.468565 kubelet[2128]: E0213 20:14:37.468168 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:37.469071 kubelet[2128]: E0213 20:14:37.469036 2128 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.469191 kubelet[2128]: E0213 20:14:37.469175 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:37.594270 kubelet[2128]: I0213 20:14:37.593847 2128 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.750935 kubelet[2128]: E0213 20:14:37.750822 2128 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-9-4d1da4e47c\" not found" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.783707 kubelet[2128]: I0213 20:14:37.783514 2128 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.783707 kubelet[2128]: E0213 20:14:37.783564 2128 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081.3.1-9-4d1da4e47c\": node \"ci-4081.3.1-9-4d1da4e47c\" not found" Feb 13 20:14:37.816049 kubelet[2128]: I0213 20:14:37.816005 2128 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.828655 kubelet[2128]: E0213 20:14:37.828409 2128 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.828655 kubelet[2128]: I0213 20:14:37.828444 2128 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.831674 kubelet[2128]: E0213 20:14:37.831636 2128 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.833265 kubelet[2128]: I0213 20:14:37.831843 2128 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:37.834385 kubelet[2128]: E0213 20:14:37.834332 2128 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-9-4d1da4e47c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:38.391315 kubelet[2128]: I0213 20:14:38.391044 2128 apiserver.go:52] "Watching apiserver" Feb 13 20:14:38.417558 kubelet[2128]: I0213 20:14:38.417503 2128 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:14:39.953943 systemd[1]: 
Reloading requested from client PID 2397 ('systemctl') (unit session-7.scope)... Feb 13 20:14:39.953962 systemd[1]: Reloading... Feb 13 20:14:40.081267 zram_generator::config[2439]: No configuration found. Feb 13 20:14:40.202702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:14:40.294716 systemd[1]: Reloading finished in 340 ms. Feb 13 20:14:40.347601 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:14:40.362030 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:14:40.362364 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:40.367787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:14:40.513033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:14:40.524857 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:14:40.607070 kubelet[2487]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:14:40.608263 kubelet[2487]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:14:40.608263 kubelet[2487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:14:40.608263 kubelet[2487]: I0213 20:14:40.607625 2487 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:14:40.616147 kubelet[2487]: I0213 20:14:40.616096 2487 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:14:40.616147 kubelet[2487]: I0213 20:14:40.616129 2487 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:14:40.616477 kubelet[2487]: I0213 20:14:40.616463 2487 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:14:40.621174 kubelet[2487]: I0213 20:14:40.620472 2487 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:14:40.630224 kubelet[2487]: I0213 20:14:40.629873 2487 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:14:40.634776 kubelet[2487]: E0213 20:14:40.634728 2487 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:14:40.634776 kubelet[2487]: I0213 20:14:40.634771 2487 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:14:40.639720 kubelet[2487]: I0213 20:14:40.639637 2487 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:14:40.640383 kubelet[2487]: I0213 20:14:40.640107 2487 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:14:40.640532 kubelet[2487]: I0213 20:14:40.640159 2487 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-9-4d1da4e47c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:14:40.640697 kubelet[2487]: I0213 20:14:40.640677 2487 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:14:40.642704 kubelet[2487]: I0213 20:14:40.642092 2487 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:14:40.642704 kubelet[2487]: I0213 20:14:40.642173 2487 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:14:40.642704 kubelet[2487]: I0213 20:14:40.642387 2487 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:14:40.642704 kubelet[2487]: I0213 20:14:40.642408 2487 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:14:40.642704 kubelet[2487]: I0213 20:14:40.642429 2487 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:14:40.642704 kubelet[2487]: I0213 20:14:40.642441 2487 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:14:40.646622 kubelet[2487]: I0213 20:14:40.645121 2487 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:14:40.646622 kubelet[2487]: I0213 20:14:40.645567 2487 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:14:40.646622 kubelet[2487]: I0213 20:14:40.646052 2487 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:14:40.646622 kubelet[2487]: I0213 20:14:40.646081 2487 server.go:1287] "Started kubelet" Feb 13 20:14:40.654461 kubelet[2487]: I0213 20:14:40.654427 2487 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:14:40.671863 kubelet[2487]: I0213 20:14:40.654830 2487 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:14:40.674310 kubelet[2487]: I0213 20:14:40.673118 2487 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:14:40.674310 kubelet[2487]: I0213 20:14:40.657043 2487 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:14:40.674310 kubelet[2487]: I0213 20:14:40.674193 2487 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:14:40.674558 kubelet[2487]: E0213 20:14:40.674456 2487 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-9-4d1da4e47c\" not found" Feb 13 20:14:40.684089 kubelet[2487]: I0213 20:14:40.684049 2487 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:14:40.684454 kubelet[2487]: I0213 20:14:40.684424 2487 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:14:40.691034 kubelet[2487]: I0213 20:14:40.654920 2487 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:14:40.691034 kubelet[2487]: I0213 20:14:40.689301 2487 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:14:40.691034 kubelet[2487]: I0213 20:14:40.689571 2487 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:14:40.691034 kubelet[2487]: I0213 20:14:40.689684 2487 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:14:40.696010 kubelet[2487]: E0213 20:14:40.695978 2487 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:14:40.696546 kubelet[2487]: I0213 20:14:40.696526 2487 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:14:40.697465 kubelet[2487]: I0213 20:14:40.697415 2487 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:14:40.701025 kubelet[2487]: I0213 20:14:40.700985 2487 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:14:40.701172 kubelet[2487]: I0213 20:14:40.701043 2487 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:14:40.701172 kubelet[2487]: I0213 20:14:40.701075 2487 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:14:40.701172 kubelet[2487]: I0213 20:14:40.701085 2487 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:14:40.701172 kubelet[2487]: E0213 20:14:40.701160 2487 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:14:40.764894 kubelet[2487]: I0213 20:14:40.764811 2487 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:14:40.764894 kubelet[2487]: I0213 20:14:40.764836 2487 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:14:40.765145 kubelet[2487]: I0213 20:14:40.764924 2487 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:14:40.765145 kubelet[2487]: I0213 20:14:40.765128 2487 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:14:40.765212 kubelet[2487]: I0213 20:14:40.765143 2487 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:14:40.765212 kubelet[2487]: I0213 20:14:40.765170 2487 policy_none.go:49] "None policy: Start" Feb 13 20:14:40.765212 kubelet[2487]: I0213 20:14:40.765182 2487 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:14:40.765212 kubelet[2487]: I0213 20:14:40.765193 2487 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:14:40.765407 kubelet[2487]: I0213 20:14:40.765387 2487 state_mem.go:75] "Updated machine memory state" Feb 13 20:14:40.770322 kubelet[2487]: I0213 20:14:40.770044 2487 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:14:40.770322 kubelet[2487]: I0213 20:14:40.770263 2487 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:14:40.770322 kubelet[2487]: I0213 20:14:40.770280 2487 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:14:40.770908 kubelet[2487]: I0213 20:14:40.770794 2487 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:14:40.773803 kubelet[2487]: E0213 20:14:40.773767 2487 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:14:40.804559 kubelet[2487]: I0213 20:14:40.804501 2487 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.805345 kubelet[2487]: I0213 20:14:40.805037 2487 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.805469 kubelet[2487]: I0213 20:14:40.805452 2487 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.814422 kubelet[2487]: W0213 20:14:40.814380 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:40.816508 kubelet[2487]: W0213 20:14:40.816476 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:40.816508 kubelet[2487]: W0213 20:14:40.816458 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:40.872654 kubelet[2487]: I0213 20:14:40.871592 2487 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.883224 kubelet[2487]: I0213 20:14:40.883164 2487 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.883421 kubelet[2487]: I0213 20:14:40.883291 2487 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.890497 kubelet[2487]: I0213 20:14:40.890440 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/607ecc46cc3a7de981bf51d2389071f6-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" (UID: \"607ecc46cc3a7de981bf51d2389071f6\") " pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.890497 kubelet[2487]: I0213 20:14:40.890490 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/607ecc46cc3a7de981bf51d2389071f6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" (UID: \"607ecc46cc3a7de981bf51d2389071f6\") " pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.890688 kubelet[2487]: I0213 20:14:40.890511 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/607ecc46cc3a7de981bf51d2389071f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-9-4d1da4e47c\" (UID: \"607ecc46cc3a7de981bf51d2389071f6\") " pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.890688 kubelet[2487]: I0213 20:14:40.890532 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.890688 kubelet[2487]: I0213 20:14:40.890552 2487 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.991219 kubelet[2487]: I0213 20:14:40.990851 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c4cca2d8b8f2aef65c74724ef18d1fd-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-9-4d1da4e47c\" (UID: \"9c4cca2d8b8f2aef65c74724ef18d1fd\") " pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.991219 kubelet[2487]: I0213 20:14:40.990935 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.991219 kubelet[2487]: I0213 20:14:40.990961 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:40.991219 kubelet[2487]: I0213 20:14:40.990986 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee17e95dc072d75e2c40a8b9c5bc9aed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-9-4d1da4e47c\" (UID: \"ee17e95dc072d75e2c40a8b9c5bc9aed\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:41.115871 kubelet[2487]: E0213 20:14:41.115770 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:41.118474 kubelet[2487]: E0213 20:14:41.117414 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:41.118474 kubelet[2487]: E0213 20:14:41.117601 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:41.644975 kubelet[2487]: I0213 20:14:41.644818 2487 apiserver.go:52] "Watching apiserver" Feb 13 20:14:41.690306 kubelet[2487]: I0213 20:14:41.690033 2487 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:14:41.731909 kubelet[2487]: E0213 20:14:41.731864 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:41.733301 kubelet[2487]: E0213 20:14:41.732218 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:41.733433 kubelet[2487]: I0213 20:14:41.733417 2487 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:41.788574 kubelet[2487]: W0213 20:14:41.788527 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:14:41.788744 kubelet[2487]: E0213 20:14:41.788602 2487 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-9-4d1da4e47c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" Feb 13 20:14:41.788781 kubelet[2487]: E0213 20:14:41.788770 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:41.908253 kubelet[2487]: I0213 20:14:41.907943 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-9-4d1da4e47c" podStartSLOduration=1.9079213560000001 podStartE2EDuration="1.907921356s" podCreationTimestamp="2025-02-13 20:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:41.862831536 +0000 UTC m=+1.331354923" watchObservedRunningTime="2025-02-13 20:14:41.907921356 +0000 UTC m=+1.376444736" Feb 13 20:14:41.947932 kubelet[2487]: I0213 20:14:41.947672 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-9-4d1da4e47c" podStartSLOduration=1.9476489369999999 podStartE2EDuration="1.947648937s" podCreationTimestamp="2025-02-13 20:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:41.909066113 +0000 UTC m=+1.377589500" watchObservedRunningTime="2025-02-13 20:14:41.947648937 +0000 UTC m=+1.416172315" Feb 13 20:14:41.979425 kubelet[2487]: I0213 20:14:41.978890 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-9-4d1da4e47c" podStartSLOduration=1.9788692 podStartE2EDuration="1.9788692s" podCreationTimestamp="2025-02-13 20:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:41.95261441 +0000 UTC m=+1.421137796" watchObservedRunningTime="2025-02-13 20:14:41.9788692 +0000 UTC m=+1.447392586" Feb 13 20:14:42.737321 kubelet[2487]: E0213 20:14:42.735157 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:42.739028 kubelet[2487]: E0213 20:14:42.738460 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:44.060030 kubelet[2487]: E0213 20:14:44.059984 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:44.978909 kubelet[2487]: E0213 20:14:44.978787 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:46.047957 kubelet[2487]: I0213 20:14:46.047916 2487 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:14:46.048474 containerd[1459]: time="2025-02-13T20:14:46.048288618Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:14:46.048783 kubelet[2487]: I0213 20:14:46.048763 2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:14:46.259581 sudo[1652]: pam_unix(sudo:session): session closed for user root Feb 13 20:14:46.267710 sshd[1649]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:46.272009 systemd[1]: sshd@6-146.190.40.231:22-147.75.109.163:47638.service: Deactivated successfully. Feb 13 20:14:46.274803 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:14:46.275109 systemd[1]: session-7.scope: Consumed 5.648s CPU time, 145.2M memory peak, 0B memory swap peak. Feb 13 20:14:46.278210 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:14:46.280848 systemd-logind[1449]: Removed session 7. Feb 13 20:14:47.165989 kubelet[2487]: E0213 20:14:47.165950 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:47.234342 systemd[1]: Created slice kubepods-besteffort-pode997bf3a_53cd_4ec3_9b98_f23c186c400e.slice - libcontainer container kubepods-besteffort-pode997bf3a_53cd_4ec3_9b98_f23c186c400e.slice. Feb 13 20:14:47.331666 kubelet[2487]: I0213 20:14:47.330647 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e997bf3a-53cd-4ec3-9b98-f23c186c400e-kube-proxy\") pod \"kube-proxy-cmvqr\" (UID: \"e997bf3a-53cd-4ec3-9b98-f23c186c400e\") " pod="kube-system/kube-proxy-cmvqr" Feb 13 20:14:47.331666 kubelet[2487]: I0213 20:14:47.330699 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e997bf3a-53cd-4ec3-9b98-f23c186c400e-xtables-lock\") pod \"kube-proxy-cmvqr\" (UID: \"e997bf3a-53cd-4ec3-9b98-f23c186c400e\") " pod="kube-system/kube-proxy-cmvqr" Feb 13 20:14:47.331666 kubelet[2487]: I0213 20:14:47.330719 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e997bf3a-53cd-4ec3-9b98-f23c186c400e-lib-modules\") pod \"kube-proxy-cmvqr\" (UID: \"e997bf3a-53cd-4ec3-9b98-f23c186c400e\") " pod="kube-system/kube-proxy-cmvqr" Feb 13 20:14:47.331666 kubelet[2487]: I0213 20:14:47.330738 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcktg\" (UniqueName: \"kubernetes.io/projected/e997bf3a-53cd-4ec3-9b98-f23c186c400e-kube-api-access-qcktg\") pod \"kube-proxy-cmvqr\" (UID: \"e997bf3a-53cd-4ec3-9b98-f23c186c400e\") " pod="kube-system/kube-proxy-cmvqr" Feb 13 20:14:47.331525 systemd[1]: Created slice kubepods-besteffort-pod9588f959_c26c_483b_9431_64d9dfcf49d0.slice - libcontainer container kubepods-besteffort-pod9588f959_c26c_483b_9431_64d9dfcf49d0.slice. 
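
The repeated dns.go:153 errors in the entries above come from kubelet's resolver cap: when building a pod's DNS configuration it passes through at most three nameservers (the classic glibc MAXNS limit), so a host /etc/resolv.conf with more entries triggers this warning on nearly every pod sync, and the "applied nameserver line" shows the three survivors. The duplicate 67.207.67.3 in that line suggests the droplet's resolv.conf itself repeats an entry. A minimal Go sketch of the truncation, assuming the standard /etc/resolv.conf path and a hard cap of three; this illustrates the behavior and is not kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the cap kubelet applies when building a
    // pod's resolver config (and glibc's historical MAXNS of 3).
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf") // assumed host path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // The condition behind "Nameserver limits exceeded": only
            // the first three entries are applied, the rest omitted.
            servers = servers[:maxNameservers]
            fmt.Printf("nameserver limits exceeded, applying: %s\n",
                strings.Join(servers, " "))
        }
    }

Because the warning fires on each sync of each pod, a single over-long resolv.conf accounts for the entire drumbeat of dns.go:153 lines throughout this boot.
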
Feb 13 20:14:47.431847 kubelet[2487]: I0213 20:14:47.431783 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9588f959-c26c-483b-9431-64d9dfcf49d0-var-lib-calico\") pod \"tigera-operator-7d68577dc5-7bkb2\" (UID: \"9588f959-c26c-483b-9431-64d9dfcf49d0\") " pod="tigera-operator/tigera-operator-7d68577dc5-7bkb2" Feb 13 20:14:47.432581 kubelet[2487]: I0213 20:14:47.431955 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gts7\" (UniqueName: \"kubernetes.io/projected/9588f959-c26c-483b-9431-64d9dfcf49d0-kube-api-access-5gts7\") pod \"tigera-operator-7d68577dc5-7bkb2\" (UID: \"9588f959-c26c-483b-9431-64d9dfcf49d0\") " pod="tigera-operator/tigera-operator-7d68577dc5-7bkb2" Feb 13 20:14:47.546295 kubelet[2487]: E0213 20:14:47.546255 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:47.547687 containerd[1459]: time="2025-02-13T20:14:47.547045767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmvqr,Uid:e997bf3a-53cd-4ec3-9b98-f23c186c400e,Namespace:kube-system,Attempt:0,}" Feb 13 20:14:47.571586 containerd[1459]: time="2025-02-13T20:14:47.571261315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:47.571586 containerd[1459]: time="2025-02-13T20:14:47.571332315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:47.571586 containerd[1459]: time="2025-02-13T20:14:47.571347993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:47.571586 containerd[1459]: time="2025-02-13T20:14:47.571471523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:47.595515 systemd[1]: Started cri-containerd-bfecdf71a19fa4b234e6dd1cbbad7b279bffa42c5b232ce0a07f36389620bda0.scope - libcontainer container bfecdf71a19fa4b234e6dd1cbbad7b279bffa42c5b232ce0a07f36389620bda0. 
Feb 13 20:14:47.632016 containerd[1459]: time="2025-02-13T20:14:47.631901692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmvqr,Uid:e997bf3a-53cd-4ec3-9b98-f23c186c400e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfecdf71a19fa4b234e6dd1cbbad7b279bffa42c5b232ce0a07f36389620bda0\"" Feb 13 20:14:47.633167 kubelet[2487]: E0213 20:14:47.633144 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:47.637536 containerd[1459]: time="2025-02-13T20:14:47.637203120Z" level=info msg="CreateContainer within sandbox \"bfecdf71a19fa4b234e6dd1cbbad7b279bffa42c5b232ce0a07f36389620bda0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:14:47.637702 containerd[1459]: time="2025-02-13T20:14:47.637657779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-7bkb2,Uid:9588f959-c26c-483b-9431-64d9dfcf49d0,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:14:47.655744 containerd[1459]: time="2025-02-13T20:14:47.655609242Z" level=info msg="CreateContainer within sandbox \"bfecdf71a19fa4b234e6dd1cbbad7b279bffa42c5b232ce0a07f36389620bda0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79f4eb7d07b17b148f34745a5146bd7d0eab9ae44e4afedbdf3f1756cfa6b359\"" Feb 13 20:14:47.658147 containerd[1459]: time="2025-02-13T20:14:47.656659471Z" level=info msg="StartContainer for \"79f4eb7d07b17b148f34745a5146bd7d0eab9ae44e4afedbdf3f1756cfa6b359\"" Feb 13 20:14:47.674779 containerd[1459]: time="2025-02-13T20:14:47.674679278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:47.677528 containerd[1459]: time="2025-02-13T20:14:47.677300652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:47.677528 containerd[1459]: time="2025-02-13T20:14:47.677342924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:47.677528 containerd[1459]: time="2025-02-13T20:14:47.677447521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:47.692817 systemd[1]: Started cri-containerd-79f4eb7d07b17b148f34745a5146bd7d0eab9ae44e4afedbdf3f1756cfa6b359.scope - libcontainer container 79f4eb7d07b17b148f34745a5146bd7d0eab9ae44e4afedbdf3f1756cfa6b359. Feb 13 20:14:47.704462 systemd[1]: Started cri-containerd-982257427892cc5ce600b84d5fdd69e6860de5b1a6967a2df568d1c48af08a91.scope - libcontainer container 982257427892cc5ce600b84d5fdd69e6860de5b1a6967a2df568d1c48af08a91. 
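
The sandbox and container ids above trace the standard CRI sequence for kube-proxy: RunPodSandbox returned sandbox id bfecdf71…, kubelet then issued CreateContainer within that sandbox (yielding 79f4eb7d…) and StartContainer on the result, whose success is logged just below. A hedged sketch of those three calls against containerd's CRI socket using the k8s.io/cri-api client; the socket path and image reference are placeholder assumptions, the pod metadata is copied from the log, and error handling is pared to the minimum:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI endpoint (assumed path).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Pod metadata copied from the log entries above.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-cmvqr",
                Namespace: "kube-system",
                Uid:       "e997bf3a-53cd-4ec3-9b98-f23c186c400e",
            },
        }

        // 1. RunPodSandbox: returns the id logged as bfecdf71…
        sb, err := rt.RunPodSandbox(ctx,
            &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within that sandbox; the image reference
        //    is a placeholder, since the log does not record it.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer: the "returns successfully" line below.
        if _, err = rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("started container", created.ContainerId)
    }

The interleaved systemd "Started cri-containerd-….scope" lines are the systemd cgroup driver creating one transient scope per container as these calls land.
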
Feb 13 20:14:47.745442 containerd[1459]: time="2025-02-13T20:14:47.745106571Z" level=info msg="StartContainer for \"79f4eb7d07b17b148f34745a5146bd7d0eab9ae44e4afedbdf3f1756cfa6b359\" returns successfully" Feb 13 20:14:47.749327 kubelet[2487]: E0213 20:14:47.749091 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:47.775937 containerd[1459]: time="2025-02-13T20:14:47.775867237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-7bkb2,Uid:9588f959-c26c-483b-9431-64d9dfcf49d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"982257427892cc5ce600b84d5fdd69e6860de5b1a6967a2df568d1c48af08a91\"" Feb 13 20:14:47.778887 containerd[1459]: time="2025-02-13T20:14:47.778572523Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:14:48.753156 kubelet[2487]: E0213 20:14:48.753060 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:48.765865 kubelet[2487]: I0213 20:14:48.765696 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cmvqr" podStartSLOduration=1.765672072 podStartE2EDuration="1.765672072s" podCreationTimestamp="2025-02-13 20:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:14:48.764457021 +0000 UTC m=+8.232980424" watchObservedRunningTime="2025-02-13 20:14:48.765672072 +0000 UTC m=+8.234195457" Feb 13 20:14:49.193305 systemd-timesyncd[1338]: Contacted time server 66.42.71.197:123 (2.flatcar.pool.ntp.org). Feb 13 20:14:49.193391 systemd-timesyncd[1338]: Initial clock synchronization to Thu 2025-02-13 20:14:49.349669 UTC. Feb 13 20:14:49.206289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692390065.mount: Deactivated successfully. 
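
A small detail in the systemd-timesyncd entries above: the journal stamped the synchronization message at 20:14:49.193391, but the NTP exchange set the clock to 20:14:49.349669, so the droplet's clock was stepped forward by roughly

    20:14:49.349669 − 20:14:49.193391 ≈ 0.156 s

meaning timestamps earlier in this boot run about 156 ms behind true UTC. That is rarely significant, but it can explain small apparent ordering inversions when correlating these lines with external logs.
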
Feb 13 20:14:49.691655 containerd[1459]: time="2025-02-13T20:14:49.691603428Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:49.692728 containerd[1459]: time="2025-02-13T20:14:49.692318617Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:14:49.693140 containerd[1459]: time="2025-02-13T20:14:49.693112947Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:49.695434 containerd[1459]: time="2025-02-13T20:14:49.695397850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:49.696666 containerd[1459]: time="2025-02-13T20:14:49.696599691Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.917985477s" Feb 13 20:14:49.696666 containerd[1459]: time="2025-02-13T20:14:49.696651286Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:14:49.701155 containerd[1459]: time="2025-02-13T20:14:49.701114327Z" level=info msg="CreateContainer within sandbox \"982257427892cc5ce600b84d5fdd69e6860de5b1a6967a2df568d1c48af08a91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:14:49.718914 containerd[1459]: time="2025-02-13T20:14:49.718867213Z" level=info msg="CreateContainer within sandbox \"982257427892cc5ce600b84d5fdd69e6860de5b1a6967a2df568d1c48af08a91\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb2231741999051d3c832284595fa437a2c2f941630a51f27fb383433fee42c5\"" Feb 13 20:14:49.721011 containerd[1459]: time="2025-02-13T20:14:49.719883983Z" level=info msg="StartContainer for \"bb2231741999051d3c832284595fa437a2c2f941630a51f27fb383433fee42c5\"" Feb 13 20:14:49.760657 systemd[1]: run-containerd-runc-k8s.io-bb2231741999051d3c832284595fa437a2c2f941630a51f27fb383433fee42c5-runc.Wc4BrL.mount: Deactivated successfully. Feb 13 20:14:49.770521 systemd[1]: Started cri-containerd-bb2231741999051d3c832284595fa437a2c2f941630a51f27fb383433fee42c5.scope - libcontainer container bb2231741999051d3c832284595fa437a2c2f941630a51f27fb383433fee42c5. 
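
The pull metrics above are internally consistent: containerd reports 21762497 bytes read for the operator image, and the Pulled line times the pull at 1.917985477 s, which works out to roughly

    21762497 bytes / 1.917985477 s ≈ 11.3 MB/s (≈ 10.8 MiB/s)

from quay.io, ignoring unpack time. The slightly smaller size "21758492" in the same line is presumably accounted from the image's manifests rather than from the bytes actually transferred, hence the small delta.
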
Feb 13 20:14:49.799832 containerd[1459]: time="2025-02-13T20:14:49.799789547Z" level=info msg="StartContainer for \"bb2231741999051d3c832284595fa437a2c2f941630a51f27fb383433fee42c5\" returns successfully" Feb 13 20:14:53.082344 kubelet[2487]: I0213 20:14:53.082262 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-7bkb2" podStartSLOduration=4.160887564 podStartE2EDuration="6.080668438s" podCreationTimestamp="2025-02-13 20:14:47 +0000 UTC" firstStartedPulling="2025-02-13 20:14:47.777926481 +0000 UTC m=+7.246449849" lastFinishedPulling="2025-02-13 20:14:49.697707354 +0000 UTC m=+9.166230723" observedRunningTime="2025-02-13 20:14:50.781819724 +0000 UTC m=+10.250343123" watchObservedRunningTime="2025-02-13 20:14:53.080668438 +0000 UTC m=+12.549191825" Feb 13 20:14:53.125742 systemd[1]: Created slice kubepods-besteffort-podfec2fe04_4cd5_4862_b5f4_ce43ff9cc56f.slice - libcontainer container kubepods-besteffort-podfec2fe04_4cd5_4862_b5f4_ce43ff9cc56f.slice. Feb 13 20:14:53.176369 kubelet[2487]: I0213 20:14:53.176180 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f-typha-certs\") pod \"calico-typha-95b5574f7-wnwfc\" (UID: \"fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f\") " pod="calico-system/calico-typha-95b5574f7-wnwfc" Feb 13 20:14:53.176369 kubelet[2487]: I0213 20:14:53.176253 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcp2\" (UniqueName: \"kubernetes.io/projected/fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f-kube-api-access-pxcp2\") pod \"calico-typha-95b5574f7-wnwfc\" (UID: \"fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f\") " pod="calico-system/calico-typha-95b5574f7-wnwfc" Feb 13 20:14:53.176369 kubelet[2487]: I0213 20:14:53.176294 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f-tigera-ca-bundle\") pod \"calico-typha-95b5574f7-wnwfc\" (UID: \"fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f\") " pod="calico-system/calico-typha-95b5574f7-wnwfc" Feb 13 20:14:53.235717 systemd[1]: Created slice kubepods-besteffort-podc5c2d4fe_784b_4a61_a57b_7cecd547a6a5.slice - libcontainer container kubepods-besteffort-podc5c2d4fe_784b_4a61_a57b_7cecd547a6a5.slice. 
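
The tigera-operator startup record above also checks out arithmetically. podStartE2EDuration is the gap from pod creation to the watch-observed running time, and podStartSLOduration is that same gap minus the time spent pulling the image (the pull timestamps are real here, unlike the zero-valued ones on the static pods, whose SLO and E2E durations are therefore equal):

    E2E  = 20:14:53.080668438 − 20:14:47.000000000 = 6.080668438 s
    pull = 20:14:49.697707354 − 20:14:47.777926481 = 1.919780873 s
    SLO  = 6.080668438 − 1.919780873               ≈ 4.160887564 s

matching the logged podStartSLOduration=4.160887564 up to rounding in the last digit.
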
Feb 13 20:14:53.277599 kubelet[2487]: I0213 20:14:53.277056 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-lib-modules\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.277599 kubelet[2487]: I0213 20:14:53.277124 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhrw\" (UniqueName: \"kubernetes.io/projected/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-kube-api-access-jnhrw\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.277599 kubelet[2487]: I0213 20:14:53.277146 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-node-certs\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.277599 kubelet[2487]: I0213 20:14:53.277163 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-xtables-lock\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.277599 kubelet[2487]: I0213 20:14:53.277179 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-cni-log-dir\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279017 kubelet[2487]: I0213 20:14:53.277210 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-policysync\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279017 kubelet[2487]: I0213 20:14:53.277224 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-cni-bin-dir\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279017 kubelet[2487]: I0213 20:14:53.277266 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-flexvol-driver-host\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279017 kubelet[2487]: I0213 20:14:53.277312 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-var-lib-calico\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279017 kubelet[2487]: I0213 20:14:53.277355 2487 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-tigera-ca-bundle\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279270 kubelet[2487]: I0213 20:14:53.277379 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-var-run-calico\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.279270 kubelet[2487]: I0213 20:14:53.277403 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c5c2d4fe-784b-4a61-a57b-7cecd547a6a5-cni-net-dir\") pod \"calico-node-gg2w5\" (UID: \"c5c2d4fe-784b-4a61-a57b-7cecd547a6a5\") " pod="calico-system/calico-node-gg2w5" Feb 13 20:14:53.368043 kubelet[2487]: E0213 20:14:53.367871 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46" Feb 13 20:14:53.386641 kubelet[2487]: E0213 20:14:53.386419 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.386641 kubelet[2487]: W0213 20:14:53.386471 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.386641 kubelet[2487]: E0213 20:14:53.386500 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.391407 kubelet[2487]: E0213 20:14:53.391368 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.392730 kubelet[2487]: W0213 20:14:53.391583 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.392730 kubelet[2487]: E0213 20:14:53.391630 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.393179 kubelet[2487]: E0213 20:14:53.393004 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.393179 kubelet[2487]: W0213 20:14:53.393029 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.393956 kubelet[2487]: E0213 20:14:53.393770 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.394455 kubelet[2487]: E0213 20:14:53.394348 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.394455 kubelet[2487]: W0213 20:14:53.394367 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.394694 kubelet[2487]: E0213 20:14:53.394611 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.395024 kubelet[2487]: E0213 20:14:53.394943 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.395024 kubelet[2487]: W0213 20:14:53.394958 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.395273 kubelet[2487]: E0213 20:14:53.395156 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.395671 kubelet[2487]: E0213 20:14:53.395492 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.395671 kubelet[2487]: W0213 20:14:53.395506 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.395671 kubelet[2487]: E0213 20:14:53.395525 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.399200 kubelet[2487]: E0213 20:14:53.399008 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.399200 kubelet[2487]: W0213 20:14:53.399027 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.399434 kubelet[2487]: E0213 20:14:53.399415 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.399990 kubelet[2487]: E0213 20:14:53.399772 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.399990 kubelet[2487]: W0213 20:14:53.399789 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.399990 kubelet[2487]: E0213 20:14:53.399809 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.402016 kubelet[2487]: E0213 20:14:53.401932 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.402016 kubelet[2487]: W0213 20:14:53.401952 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.402016 kubelet[2487]: E0213 20:14:53.401973 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.432267 kubelet[2487]: E0213 20:14:53.431507 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.433278 kubelet[2487]: W0213 20:14:53.432761 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.433278 kubelet[2487]: E0213 20:14:53.432849 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.445063 kubelet[2487]: E0213 20:14:53.443362 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:53.451645 containerd[1459]: time="2025-02-13T20:14:53.450613908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-95b5574f7-wnwfc,Uid:fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f,Namespace:calico-system,Attempt:0,}" Feb 13 20:14:53.455482 kubelet[2487]: E0213 20:14:53.455353 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.456659 kubelet[2487]: W0213 20:14:53.456024 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.456659 kubelet[2487]: E0213 20:14:53.456214 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.457405 kubelet[2487]: E0213 20:14:53.457269 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.458034 kubelet[2487]: W0213 20:14:53.457683 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.458034 kubelet[2487]: E0213 20:14:53.457818 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.459163 kubelet[2487]: E0213 20:14:53.459055 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.460136 kubelet[2487]: W0213 20:14:53.459323 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.460136 kubelet[2487]: E0213 20:14:53.459357 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.460630 kubelet[2487]: E0213 20:14:53.460350 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.460630 kubelet[2487]: W0213 20:14:53.460371 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.460630 kubelet[2487]: E0213 20:14:53.460401 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.461020 kubelet[2487]: E0213 20:14:53.460982 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.462508 kubelet[2487]: W0213 20:14:53.462290 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.462508 kubelet[2487]: E0213 20:14:53.462330 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.462860 kubelet[2487]: E0213 20:14:53.462841 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.463110 kubelet[2487]: W0213 20:14:53.462952 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.463110 kubelet[2487]: E0213 20:14:53.462983 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.463822 kubelet[2487]: E0213 20:14:53.463666 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.463822 kubelet[2487]: W0213 20:14:53.463684 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.463822 kubelet[2487]: E0213 20:14:53.463702 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.464535 kubelet[2487]: E0213 20:14:53.464361 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.464535 kubelet[2487]: W0213 20:14:53.464378 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.464535 kubelet[2487]: E0213 20:14:53.464396 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.464797 kubelet[2487]: E0213 20:14:53.464780 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.465027 kubelet[2487]: W0213 20:14:53.465005 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.465126 kubelet[2487]: E0213 20:14:53.465113 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.465719 kubelet[2487]: E0213 20:14:53.465703 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.465853 kubelet[2487]: W0213 20:14:53.465821 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.466130 kubelet[2487]: E0213 20:14:53.465964 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.467657 kubelet[2487]: E0213 20:14:53.467484 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.467657 kubelet[2487]: W0213 20:14:53.467508 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.467657 kubelet[2487]: E0213 20:14:53.467528 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.468235 kubelet[2487]: E0213 20:14:53.468064 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.468235 kubelet[2487]: W0213 20:14:53.468083 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.468235 kubelet[2487]: E0213 20:14:53.468100 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.470502 kubelet[2487]: E0213 20:14:53.470343 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.470502 kubelet[2487]: W0213 20:14:53.470365 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.470502 kubelet[2487]: E0213 20:14:53.470386 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.473754 kubelet[2487]: E0213 20:14:53.473569 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.473754 kubelet[2487]: W0213 20:14:53.473595 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.473754 kubelet[2487]: E0213 20:14:53.473619 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.476094 kubelet[2487]: E0213 20:14:53.475983 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.477027 kubelet[2487]: W0213 20:14:53.476260 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.477027 kubelet[2487]: E0213 20:14:53.476296 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.479577 kubelet[2487]: E0213 20:14:53.479292 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.479577 kubelet[2487]: W0213 20:14:53.479345 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.479577 kubelet[2487]: E0213 20:14:53.479375 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.481036 kubelet[2487]: E0213 20:14:53.480618 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.481036 kubelet[2487]: W0213 20:14:53.480645 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.481036 kubelet[2487]: E0213 20:14:53.480672 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.482023 kubelet[2487]: E0213 20:14:53.481733 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.482023 kubelet[2487]: W0213 20:14:53.481755 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.482023 kubelet[2487]: E0213 20:14:53.481779 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.482888 kubelet[2487]: E0213 20:14:53.482698 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.482888 kubelet[2487]: W0213 20:14:53.482717 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.482888 kubelet[2487]: E0213 20:14:53.482736 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.484358 kubelet[2487]: E0213 20:14:53.483711 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.484358 kubelet[2487]: W0213 20:14:53.483731 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.484358 kubelet[2487]: E0213 20:14:53.483748 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.485143 kubelet[2487]: E0213 20:14:53.484953 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.485143 kubelet[2487]: W0213 20:14:53.484972 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.485143 kubelet[2487]: E0213 20:14:53.484989 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.485143 kubelet[2487]: I0213 20:14:53.485032 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c75929e-d081-42af-bef0-99987551ea46-varrun\") pod \"csi-node-driver-mvlgz\" (UID: \"3c75929e-d081-42af-bef0-99987551ea46\") " pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:14:53.485988 kubelet[2487]: E0213 20:14:53.485949 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.486157 kubelet[2487]: W0213 20:14:53.485968 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.486157 kubelet[2487]: E0213 20:14:53.486100 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.486508 kubelet[2487]: I0213 20:14:53.486304 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c75929e-d081-42af-bef0-99987551ea46-socket-dir\") pod \"csi-node-driver-mvlgz\" (UID: \"3c75929e-d081-42af-bef0-99987551ea46\") " pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:14:53.487088 kubelet[2487]: E0213 20:14:53.487045 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.487377 kubelet[2487]: W0213 20:14:53.487182 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.487579 kubelet[2487]: E0213 20:14:53.487293 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.488165 kubelet[2487]: E0213 20:14:53.487996 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.488165 kubelet[2487]: W0213 20:14:53.488017 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.488165 kubelet[2487]: E0213 20:14:53.488043 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.488586 kubelet[2487]: I0213 20:14:53.488299 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtmm5\" (UniqueName: \"kubernetes.io/projected/3c75929e-d081-42af-bef0-99987551ea46-kube-api-access-mtmm5\") pod \"csi-node-driver-mvlgz\" (UID: \"3c75929e-d081-42af-bef0-99987551ea46\") " pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:14:53.489412 kubelet[2487]: E0213 20:14:53.489048 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.489412 kubelet[2487]: W0213 20:14:53.489098 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.489412 kubelet[2487]: E0213 20:14:53.489123 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.489841 kubelet[2487]: E0213 20:14:53.489687 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.489841 kubelet[2487]: W0213 20:14:53.489702 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.489841 kubelet[2487]: E0213 20:14:53.489738 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.490672 kubelet[2487]: E0213 20:14:53.490473 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.490672 kubelet[2487]: W0213 20:14:53.490493 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.490672 kubelet[2487]: E0213 20:14:53.490529 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.491272 kubelet[2487]: E0213 20:14:53.491094 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.491272 kubelet[2487]: W0213 20:14:53.491114 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.491272 kubelet[2487]: E0213 20:14:53.491129 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.491790 kubelet[2487]: E0213 20:14:53.491767 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.492057 kubelet[2487]: W0213 20:14:53.491886 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.492057 kubelet[2487]: E0213 20:14:53.491910 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.492629 kubelet[2487]: E0213 20:14:53.492496 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.492629 kubelet[2487]: W0213 20:14:53.492511 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.492629 kubelet[2487]: E0213 20:14:53.492527 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.492629 kubelet[2487]: I0213 20:14:53.492578 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c75929e-d081-42af-bef0-99987551ea46-kubelet-dir\") pod \"csi-node-driver-mvlgz\" (UID: \"3c75929e-d081-42af-bef0-99987551ea46\") " pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:14:53.493376 kubelet[2487]: E0213 20:14:53.493167 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.493376 kubelet[2487]: W0213 20:14:53.493185 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.493643 kubelet[2487]: E0213 20:14:53.493213 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.493643 kubelet[2487]: I0213 20:14:53.493557 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c75929e-d081-42af-bef0-99987551ea46-registration-dir\") pod \"csi-node-driver-mvlgz\" (UID: \"3c75929e-d081-42af-bef0-99987551ea46\") " pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:14:53.494130 kubelet[2487]: E0213 20:14:53.494014 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.494130 kubelet[2487]: W0213 20:14:53.494053 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.494130 kubelet[2487]: E0213 20:14:53.494085 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:53.494981 kubelet[2487]: E0213 20:14:53.494779 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.494981 kubelet[2487]: W0213 20:14:53.494797 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.494981 kubelet[2487]: E0213 20:14:53.494871 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.495632 kubelet[2487]: E0213 20:14:53.495488 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.495632 kubelet[2487]: W0213 20:14:53.495505 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.495632 kubelet[2487]: E0213 20:14:53.495520 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.496037 kubelet[2487]: E0213 20:14:53.495978 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.496037 kubelet[2487]: W0213 20:14:53.495993 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.496037 kubelet[2487]: E0213 20:14:53.496009 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.524737 containerd[1459]: time="2025-02-13T20:14:53.524564988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:14:53.524737 containerd[1459]: time="2025-02-13T20:14:53.524681266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:14:53.524737 containerd[1459]: time="2025-02-13T20:14:53.524703691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:53.525045 containerd[1459]: time="2025-02-13T20:14:53.524853713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:14:53.551380 kubelet[2487]: E0213 20:14:53.549831 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:53.551575 containerd[1459]: time="2025-02-13T20:14:53.550787649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gg2w5,Uid:c5c2d4fe-784b-4a61-a57b-7cecd547a6a5,Namespace:calico-system,Attempt:0,}" Feb 13 20:14:53.565534 systemd[1]: Started cri-containerd-bb95f91b8c295e4148eb091136db5e9994b276f32944635b26e5590b107517e3.scope - libcontainer container bb95f91b8c295e4148eb091136db5e9994b276f32944635b26e5590b107517e3. Feb 13 20:14:53.595017 kubelet[2487]: E0213 20:14:53.594949 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.595017 kubelet[2487]: W0213 20:14:53.594976 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.596758 kubelet[2487]: E0213 20:14:53.595311 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.596758 kubelet[2487]: E0213 20:14:53.596444 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.596758 kubelet[2487]: W0213 20:14:53.596462 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.596758 kubelet[2487]: E0213 20:14:53.596488 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.597401 kubelet[2487]: E0213 20:14:53.597249 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.597401 kubelet[2487]: W0213 20:14:53.597281 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.597401 kubelet[2487]: E0213 20:14:53.597303 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:53.598598 kubelet[2487]: E0213 20:14:53.598578 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:53.598729 kubelet[2487]: W0213 20:14:53.598706 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:53.599194 kubelet[2487]: E0213 20:14:53.599101 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:14:53.620127 containerd[1459]: time="2025-02-13T20:14:53.619914970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:14:53.620127 containerd[1459]: time="2025-02-13T20:14:53.620003515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:14:53.620127 containerd[1459]: time="2025-02-13T20:14:53.620030976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:53.621823 containerd[1459]: time="2025-02-13T20:14:53.621727482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:14:53.631372 kubelet[2487]: E0213 20:14:53.631177 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:14:53.631372 kubelet[2487]: W0213 20:14:53.631208 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:14:53.631372 kubelet[2487]: E0213 20:14:53.631233 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:14:53.689532 systemd[1]: Started cri-containerd-6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a.scope - libcontainer container 6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a.
Feb 13 20:14:53.729697 containerd[1459]: time="2025-02-13T20:14:53.729635957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-95b5574f7-wnwfc,Uid:fec2fe04-4cd5-4862-b5f4-ce43ff9cc56f,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb95f91b8c295e4148eb091136db5e9994b276f32944635b26e5590b107517e3\""
Feb 13 20:14:53.732685 kubelet[2487]: E0213 20:14:53.732397 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:53.736546 containerd[1459]: time="2025-02-13T20:14:53.736220637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 20:14:53.782820 containerd[1459]: time="2025-02-13T20:14:53.782457070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gg2w5,Uid:c5c2d4fe-784b-4a61-a57b-7cecd547a6a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\""
Feb 13 20:14:53.786125 kubelet[2487]: E0213 20:14:53.785248 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:54.068212 kubelet[2487]: E0213 20:14:54.068052 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:54.091572 kubelet[2487]: E0213 20:14:54.091535 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:14:54.092165 kubelet[2487]: W0213 20:14:54.091877 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:14:54.092165 kubelet[2487]: E0213 20:14:54.091919 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:14:54.703636 kubelet[2487]: E0213 20:14:54.702503 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46"
Feb 13 20:14:54.786543 kubelet[2487]: E0213 20:14:54.786125 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:54.806575 kubelet[2487]: E0213 20:14:54.806524 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:14:54.806575 kubelet[2487]: W0213 20:14:54.806565 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:14:54.806773 kubelet[2487]: E0213 20:14:54.806660 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:14:54.985452 kubelet[2487]: E0213 20:14:54.984940 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:55.015118 kubelet[2487]: E0213 20:14:55.015050 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:14:55.015118 kubelet[2487]: W0213 20:14:55.015084 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:14:55.015118 kubelet[2487]: E0213 20:14:55.015113 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:14:55.207103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911102509.mount: Deactivated successfully.
Feb 13 20:14:55.762563 containerd[1459]: time="2025-02-13T20:14:55.762504639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:14:55.763516 containerd[1459]: time="2025-02-13T20:14:55.763262628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Feb 13 20:14:55.764276 containerd[1459]: time="2025-02-13T20:14:55.763929513Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:14:55.765739 containerd[1459]: time="2025-02-13T20:14:55.765711852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:14:55.766579 containerd[1459]: time="2025-02-13T20:14:55.766550031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.029998291s"
Feb 13 20:14:55.766683 containerd[1459]: time="2025-02-13T20:14:55.766667247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 20:14:55.768612 containerd[1459]: time="2025-02-13T20:14:55.768391235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 20:14:55.782738 containerd[1459]: time="2025-02-13T20:14:55.782672466Z" level=info msg="CreateContainer within sandbox \"bb95f91b8c295e4148eb091136db5e9994b276f32944635b26e5590b107517e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 20:14:55.790795 kubelet[2487]: E0213 20:14:55.790547 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:55.800874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445017447.mount: Deactivated successfully.
Feb 13 20:14:55.802583 containerd[1459]: time="2025-02-13T20:14:55.802539259Z" level=info msg="CreateContainer within sandbox \"bb95f91b8c295e4148eb091136db5e9994b276f32944635b26e5590b107517e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ba3189c7e24574b3cf79179aa91aa4eeadde355df0d760361b1de0bbfb06ff67\""
Feb 13 20:14:55.803778 containerd[1459]: time="2025-02-13T20:14:55.803746227Z" level=info msg="StartContainer for \"ba3189c7e24574b3cf79179aa91aa4eeadde355df0d760361b1de0bbfb06ff67\""
Feb 13 20:14:55.826617 kubelet[2487]: E0213 20:14:55.826385 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:14:55.826617 kubelet[2487]: W0213 20:14:55.826415 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:14:55.826617 kubelet[2487]: E0213 20:14:55.826457 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:14:55.850532 systemd[1]: Started cri-containerd-ba3189c7e24574b3cf79179aa91aa4eeadde355df0d760361b1de0bbfb06ff67.scope - libcontainer container ba3189c7e24574b3cf79179aa91aa4eeadde355df0d760361b1de0bbfb06ff67.
Feb 13 20:14:55.901190 containerd[1459]: time="2025-02-13T20:14:55.901139638Z" level=info msg="StartContainer for \"ba3189c7e24574b3cf79179aa91aa4eeadde355df0d760361b1de0bbfb06ff67\" returns successfully"
Feb 13 20:14:56.702069 kubelet[2487]: E0213 20:14:56.701648 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46"
Feb 13 20:14:56.795327 kubelet[2487]: E0213 20:14:56.794538 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:14:56.840481 kubelet[2487]: E0213 20:14:56.840443 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:14:56.840933 kubelet[2487]: W0213 20:14:56.840701 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:14:56.840933 kubelet[2487]: E0213 20:14:56.840743 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Feb 13 20:14:56.850314 kubelet[2487]: E0213 20:14:56.849896 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.850314 kubelet[2487]: W0213 20:14:56.849903 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.850314 kubelet[2487]: E0213 20:14:56.850038 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.850314 kubelet[2487]: W0213 20:14:56.850044 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.850314 kubelet[2487]: E0213 20:14:56.850054 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.850314 kubelet[2487]: E0213 20:14:56.850098 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.850314 kubelet[2487]: E0213 20:14:56.850261 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.850314 kubelet[2487]: W0213 20:14:56.850271 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.850314 kubelet[2487]: E0213 20:14:56.850287 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.850762 kubelet[2487]: E0213 20:14:56.850439 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.850762 kubelet[2487]: W0213 20:14:56.850447 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.850762 kubelet[2487]: E0213 20:14:56.850455 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.850762 kubelet[2487]: E0213 20:14:56.850626 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.850762 kubelet[2487]: W0213 20:14:56.850632 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.850762 kubelet[2487]: E0213 20:14:56.850640 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:56.851203 kubelet[2487]: E0213 20:14:56.851188 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.851203 kubelet[2487]: W0213 20:14:56.851201 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.851433 kubelet[2487]: E0213 20:14:56.851267 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.851433 kubelet[2487]: E0213 20:14:56.851376 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.851433 kubelet[2487]: W0213 20:14:56.851383 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.851433 kubelet[2487]: E0213 20:14:56.851409 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.851834 kubelet[2487]: E0213 20:14:56.851529 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.851834 kubelet[2487]: W0213 20:14:56.851666 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.851834 kubelet[2487]: E0213 20:14:56.851681 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.852134 kubelet[2487]: E0213 20:14:56.852059 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.852134 kubelet[2487]: W0213 20:14:56.852073 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.852134 kubelet[2487]: E0213 20:14:56.852087 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.852660 kubelet[2487]: E0213 20:14:56.852546 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.852660 kubelet[2487]: W0213 20:14:56.852561 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.852660 kubelet[2487]: E0213 20:14:56.852576 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:14:56.853558 kubelet[2487]: E0213 20:14:56.853174 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.853558 kubelet[2487]: W0213 20:14:56.853190 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.853558 kubelet[2487]: E0213 20:14:56.853205 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:56.853842 kubelet[2487]: E0213 20:14:56.853776 2487 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:14:56.853842 kubelet[2487]: W0213 20:14:56.853791 2487 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:14:56.853842 kubelet[2487]: E0213 20:14:56.853807 2487 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:14:57.236067 containerd[1459]: time="2025-02-13T20:14:57.235952241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:57.237998 containerd[1459]: time="2025-02-13T20:14:57.237434749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:14:57.238269 containerd[1459]: time="2025-02-13T20:14:57.238133905Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:57.240279 containerd[1459]: time="2025-02-13T20:14:57.239977566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:14:57.241441 containerd[1459]: time="2025-02-13T20:14:57.241403447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.472983038s" Feb 13 20:14:57.241441 containerd[1459]: time="2025-02-13T20:14:57.241441766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:14:57.243919 containerd[1459]: time="2025-02-13T20:14:57.243880430Z" level=info msg="CreateContainer within sandbox \"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:14:57.256484 containerd[1459]: time="2025-02-13T20:14:57.256436175Z" level=info msg="CreateContainer within sandbox 
\"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd\"" Feb 13 20:14:57.259275 containerd[1459]: time="2025-02-13T20:14:57.259183535Z" level=info msg="StartContainer for \"28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd\"" Feb 13 20:14:57.268372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591139000.mount: Deactivated successfully. Feb 13 20:14:57.337579 systemd[1]: Started cri-containerd-28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd.scope - libcontainer container 28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd. Feb 13 20:14:57.402044 containerd[1459]: time="2025-02-13T20:14:57.401992404Z" level=info msg="StartContainer for \"28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd\" returns successfully" Feb 13 20:14:57.428705 systemd[1]: cri-containerd-28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd.scope: Deactivated successfully. Feb 13 20:14:57.483052 containerd[1459]: time="2025-02-13T20:14:57.477755218Z" level=info msg="shim disconnected" id=28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd namespace=k8s.io Feb 13 20:14:57.483052 containerd[1459]: time="2025-02-13T20:14:57.482296372Z" level=warning msg="cleaning up after shim disconnected" id=28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd namespace=k8s.io Feb 13 20:14:57.483052 containerd[1459]: time="2025-02-13T20:14:57.482310731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:14:57.779375 systemd[1]: run-containerd-runc-k8s.io-28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd-runc.C5E96a.mount: Deactivated successfully. Feb 13 20:14:57.779745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28a85f81b40678a1a6fd00df8d899ca1f8f655522d7304b7367f786786366ccd-rootfs.mount: Deactivated successfully. 
Feb 13 20:14:57.798206 kubelet[2487]: I0213 20:14:57.798130 2487 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:14:57.798726 kubelet[2487]: E0213 20:14:57.798498 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:57.800819 kubelet[2487]: E0213 20:14:57.799723 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:14:57.801758 containerd[1459]: time="2025-02-13T20:14:57.801455247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:14:57.823835 kubelet[2487]: I0213 20:14:57.823756 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-95b5574f7-wnwfc" podStartSLOduration=2.790855632 podStartE2EDuration="4.82373252s" podCreationTimestamp="2025-02-13 20:14:53 +0000 UTC" firstStartedPulling="2025-02-13 20:14:53.735056766 +0000 UTC m=+13.203580146" lastFinishedPulling="2025-02-13 20:14:55.767933668 +0000 UTC m=+15.236457034" observedRunningTime="2025-02-13 20:14:56.813108409 +0000 UTC m=+16.281631804" watchObservedRunningTime="2025-02-13 20:14:57.82373252 +0000 UTC m=+17.292255939" Feb 13 20:14:58.703085 kubelet[2487]: E0213 20:14:58.702641 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46" Feb 13 20:14:59.605901 update_engine[1450]: I20250213 20:14:59.605809 1450 update_attempter.cc:509] Updating boot flags... 
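[annotation] The recurring dns.go:153 warnings reflect kubelet's resolv.conf cap: most libc resolvers only honour three nameserver entries, so kubelet truncates the merged list and logs the applied line (here 67.207.67.3 67.207.67.2 67.207.67.3). A rough sketch of that cap; the limit constant is inferred from the log message and the extra 8.8.8.8 entry is an illustrative assumption:

    package main

    import "fmt"

    // maxNameservers is the conventional resolv.conf limit implied by the
    // "Nameserver limits exceeded" warning above.
    const maxNameservers = 3

    // capNameservers keeps the first three entries and reports the rest as
    // omitted, matching the shape of the kubelet warning.
    func capNameservers(ns []string) (applied, omitted []string) {
        if len(ns) <= maxNameservers {
            return ns, nil
        }
        return ns[:maxNameservers], ns[maxNameservers:]
    }

    func main() {
        applied, omitted := capNameservers([]string{
            "67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8", // last entry hypothetical
        })
        fmt.Println("applied:", applied, "omitted:", omitted)
    }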
Feb 13 20:14:59.661436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3244) Feb 13 20:14:59.737278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3248) Feb 13 20:15:00.703298 kubelet[2487]: E0213 20:15:00.702397 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46" Feb 13 20:15:02.238364 containerd[1459]: time="2025-02-13T20:15:02.234906377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:02.246885 containerd[1459]: time="2025-02-13T20:15:02.245899877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:15:02.247456 containerd[1459]: time="2025-02-13T20:15:02.247400600Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:02.280938 containerd[1459]: time="2025-02-13T20:15:02.280867460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:02.284554 containerd[1459]: time="2025-02-13T20:15:02.284483227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.482984174s" Feb 13 20:15:02.286164 containerd[1459]: time="2025-02-13T20:15:02.285966652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:15:02.298790 containerd[1459]: time="2025-02-13T20:15:02.298660186Z" level=info msg="CreateContainer within sandbox \"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:15:02.345167 containerd[1459]: time="2025-02-13T20:15:02.342165569Z" level=info msg="CreateContainer within sandbox \"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029\"" Feb 13 20:15:02.352104 containerd[1459]: time="2025-02-13T20:15:02.352029252Z" level=info msg="StartContainer for \"0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029\"" Feb 13 20:15:02.549695 systemd[1]: Started cri-containerd-0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029.scope - libcontainer container 0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029. 
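[annotation] The ImageCreate/Pulled sequences above are containerd pulling images into the k8s.io namespace on the CRI's behalf; the calico/cni pull moved 96154154 bytes in ~4.48 s, roughly 21 MB/s. A standalone pull of the same reference could look like this with containerd's Go client; the socket path is the conventional default and the namespace is taken from the shim lines above:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the local containerd daemon (default socket path assumed).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace, as the shim
        // cleanup lines above show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack, the same operation the PullImage log lines record.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.29.1", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name())
    }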
Feb 13 20:15:02.707527 kubelet[2487]: E0213 20:15:02.707437 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46" Feb 13 20:15:02.727202 containerd[1459]: time="2025-02-13T20:15:02.725388816Z" level=info msg="StartContainer for \"0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029\" returns successfully" Feb 13 20:15:02.826091 kubelet[2487]: E0213 20:15:02.824732 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:03.828075 kubelet[2487]: E0213 20:15:03.828022 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:04.083404 systemd[1]: cri-containerd-0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029.scope: Deactivated successfully. Feb 13 20:15:04.151224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029-rootfs.mount: Deactivated successfully. Feb 13 20:15:04.174923 containerd[1459]: time="2025-02-13T20:15:04.174573497Z" level=info msg="shim disconnected" id=0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029 namespace=k8s.io Feb 13 20:15:04.174923 containerd[1459]: time="2025-02-13T20:15:04.174662816Z" level=warning msg="cleaning up after shim disconnected" id=0cf4f72843f18f9a27a2753dce9c692c3949c2d7fe145d655b67a5c068845029 namespace=k8s.io Feb 13 20:15:04.174923 containerd[1459]: time="2025-02-13T20:15:04.174676304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:15:04.205896 kubelet[2487]: I0213 20:15:04.204166 2487 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 20:15:04.313583 systemd[1]: Created slice kubepods-burstable-podd02c5ef1_b1ad_4b3b_8ff7_c248462e22fa.slice - libcontainer container kubepods-burstable-podd02c5ef1_b1ad_4b3b_8ff7_c248462e22fa.slice. Feb 13 20:15:04.337389 systemd[1]: Created slice kubepods-besteffort-podea4ae1c6_3de5_48e2_9f2a_afbb0ca69570.slice - libcontainer container kubepods-besteffort-podea4ae1c6_3de5_48e2_9f2a_afbb0ca69570.slice. Feb 13 20:15:04.356865 systemd[1]: Created slice kubepods-besteffort-pod57bc2770_4aec_4375_85f6_bbb47a5304af.slice - libcontainer container kubepods-besteffort-pod57bc2770_4aec_4375_85f6_bbb47a5304af.slice. Feb 13 20:15:04.368933 systemd[1]: Created slice kubepods-besteffort-pod56118977_ed07_436f_9631_a73a6dbd0a3a.slice - libcontainer container kubepods-besteffort-pod56118977_ed07_436f_9631_a73a6dbd0a3a.slice. Feb 13 20:15:04.380620 systemd[1]: Created slice kubepods-burstable-pod263edc08_b986_475d_a3d0_6d21aa1462c9.slice - libcontainer container kubepods-burstable-pod263edc08_b986_475d_a3d0_6d21aa1462c9.slice. 
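[annotation] The "Created slice kubepods-..." lines below follow kubelet's systemd cgroup naming: the pod's QoS class plus its UID with dashes escaped to underscores, since "-" is a hierarchy separator in systemd slice names. A small sketch reproducing the names seen in this log (an illustration, not kubelet's actual helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name for a pod: QoS class plus the
    // pod UID with "-" escaped to "_".
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa"))
        // kubepods-burstable-podd02c5ef1_b1ad_4b3b_8ff7_c248462e22fa.slice,
        // matching the systemd line in the log above.
    }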
Feb 13 20:15:04.429502 kubelet[2487]: I0213 20:15:04.429233 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzg5c\" (UniqueName: \"kubernetes.io/projected/56118977-ed07-436f-9631-a73a6dbd0a3a-kube-api-access-rzg5c\") pod \"calico-kube-controllers-8665865df-9q9vt\" (UID: \"56118977-ed07-436f-9631-a73a6dbd0a3a\") " pod="calico-system/calico-kube-controllers-8665865df-9q9vt" Feb 13 20:15:04.430460 kubelet[2487]: I0213 20:15:04.430228 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpm8\" (UniqueName: \"kubernetes.io/projected/57bc2770-4aec-4375-85f6-bbb47a5304af-kube-api-access-djpm8\") pod \"calico-apiserver-5c485547b7-qsz9v\" (UID: \"57bc2770-4aec-4375-85f6-bbb47a5304af\") " pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" Feb 13 20:15:04.430460 kubelet[2487]: I0213 20:15:04.430384 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf45m\" (UniqueName: \"kubernetes.io/projected/d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa-kube-api-access-wf45m\") pod \"coredns-668d6bf9bc-zxxg2\" (UID: \"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa\") " pod="kube-system/coredns-668d6bf9bc-zxxg2" Feb 13 20:15:04.431602 kubelet[2487]: I0213 20:15:04.430425 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/263edc08-b986-475d-a3d0-6d21aa1462c9-config-volume\") pod \"coredns-668d6bf9bc-dd2xl\" (UID: \"263edc08-b986-475d-a3d0-6d21aa1462c9\") " pod="kube-system/coredns-668d6bf9bc-dd2xl" Feb 13 20:15:04.431602 kubelet[2487]: I0213 20:15:04.430689 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6nh7\" (UniqueName: \"kubernetes.io/projected/263edc08-b986-475d-a3d0-6d21aa1462c9-kube-api-access-s6nh7\") pod \"coredns-668d6bf9bc-dd2xl\" (UID: \"263edc08-b986-475d-a3d0-6d21aa1462c9\") " pod="kube-system/coredns-668d6bf9bc-dd2xl" Feb 13 20:15:04.431602 kubelet[2487]: I0213 20:15:04.430739 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56118977-ed07-436f-9631-a73a6dbd0a3a-tigera-ca-bundle\") pod \"calico-kube-controllers-8665865df-9q9vt\" (UID: \"56118977-ed07-436f-9631-a73a6dbd0a3a\") " pod="calico-system/calico-kube-controllers-8665865df-9q9vt" Feb 13 20:15:04.431602 kubelet[2487]: I0213 20:15:04.430775 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa-config-volume\") pod \"coredns-668d6bf9bc-zxxg2\" (UID: \"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa\") " pod="kube-system/coredns-668d6bf9bc-zxxg2" Feb 13 20:15:04.431602 kubelet[2487]: I0213 20:15:04.430813 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570-calico-apiserver-certs\") pod \"calico-apiserver-5c485547b7-b8cmn\" (UID: \"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570\") " pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" Feb 13 20:15:04.431837 kubelet[2487]: I0213 20:15:04.430845 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57bc2770-4aec-4375-85f6-bbb47a5304af-calico-apiserver-certs\") pod \"calico-apiserver-5c485547b7-qsz9v\" (UID: \"57bc2770-4aec-4375-85f6-bbb47a5304af\") " pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" Feb 13 20:15:04.431837 kubelet[2487]: I0213 20:15:04.430891 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqbh\" (UniqueName: \"kubernetes.io/projected/ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570-kube-api-access-mkqbh\") pod \"calico-apiserver-5c485547b7-b8cmn\" (UID: \"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570\") " pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" Feb 13 20:15:04.627380 kubelet[2487]: E0213 20:15:04.626921 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:04.630294 containerd[1459]: time="2025-02-13T20:15:04.630149466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxxg2,Uid:d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:04.649708 containerd[1459]: time="2025-02-13T20:15:04.648057708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-b8cmn,Uid:ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:15:04.686140 kubelet[2487]: E0213 20:15:04.685666 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:04.691944 containerd[1459]: time="2025-02-13T20:15:04.690819505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dd2xl,Uid:263edc08-b986-475d-a3d0-6d21aa1462c9,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:04.701755 containerd[1459]: time="2025-02-13T20:15:04.700601487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-qsz9v,Uid:57bc2770-4aec-4375-85f6-bbb47a5304af,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:15:04.701755 containerd[1459]: time="2025-02-13T20:15:04.700963832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8665865df-9q9vt,Uid:56118977-ed07-436f-9631-a73a6dbd0a3a,Namespace:calico-system,Attempt:0,}" Feb 13 20:15:04.731027 systemd[1]: Created slice kubepods-besteffort-pod3c75929e_d081_42af_bef0_99987551ea46.slice - libcontainer container kubepods-besteffort-pod3c75929e_d081_42af_bef0_99987551ea46.slice. 
Feb 13 20:15:04.760009 containerd[1459]: time="2025-02-13T20:15:04.759766732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvlgz,Uid:3c75929e-d081-42af-bef0-99987551ea46,Namespace:calico-system,Attempt:0,}" Feb 13 20:15:04.914534 kubelet[2487]: E0213 20:15:04.914381 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:04.922892 containerd[1459]: time="2025-02-13T20:15:04.922636104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:15:05.445459 containerd[1459]: time="2025-02-13T20:15:05.445357536Z" level=error msg="Failed to destroy network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.451401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc-shm.mount: Deactivated successfully. Feb 13 20:15:05.461616 containerd[1459]: time="2025-02-13T20:15:05.461156736Z" level=error msg="encountered an error cleaning up failed sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.461616 containerd[1459]: time="2025-02-13T20:15:05.461297534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8665865df-9q9vt,Uid:56118977-ed07-436f-9631-a73a6dbd0a3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.471381 containerd[1459]: time="2025-02-13T20:15:05.469422825Z" level=error msg="Failed to destroy network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.471381 containerd[1459]: time="2025-02-13T20:15:05.469878352Z" level=error msg="encountered an error cleaning up failed sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.471381 containerd[1459]: time="2025-02-13T20:15:05.469958968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvlgz,Uid:3c75929e-d081-42af-bef0-99987551ea46,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 20:15:05.474887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6-shm.mount: Deactivated successfully. Feb 13 20:15:05.483748 kubelet[2487]: E0213 20:15:05.483659 2487 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.483748 kubelet[2487]: E0213 20:15:05.483753 2487 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.484150 kubelet[2487]: E0213 20:15:05.483834 2487 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:15:05.484150 kubelet[2487]: E0213 20:15:05.483868 2487 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvlgz" Feb 13 20:15:05.484150 kubelet[2487]: E0213 20:15:05.484009 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvlgz_calico-system(3c75929e-d081-42af-bef0-99987551ea46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvlgz_calico-system(3c75929e-d081-42af-bef0-99987551ea46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46" Feb 13 20:15:05.485486 kubelet[2487]: E0213 20:15:05.485414 2487 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8665865df-9q9vt" Feb 13 20:15:05.486666 kubelet[2487]: E0213 20:15:05.485803 2487 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8665865df-9q9vt" Feb 13 20:15:05.486666 kubelet[2487]: E0213 20:15:05.486008 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8665865df-9q9vt_calico-system(56118977-ed07-436f-9631-a73a6dbd0a3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8665865df-9q9vt_calico-system(56118977-ed07-436f-9631-a73a6dbd0a3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8665865df-9q9vt" podUID="56118977-ed07-436f-9631-a73a6dbd0a3a" Feb 13 20:15:05.488742 containerd[1459]: time="2025-02-13T20:15:05.488675818Z" level=error msg="Failed to destroy network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.494412 containerd[1459]: time="2025-02-13T20:15:05.493079558Z" level=error msg="encountered an error cleaning up failed sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.494412 containerd[1459]: time="2025-02-13T20:15:05.493164430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxxg2,Uid:d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.494412 containerd[1459]: time="2025-02-13T20:15:05.493345464Z" level=error msg="Failed to destroy network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.497610 kubelet[2487]: E0213 20:15:05.495977 2487 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.497610 kubelet[2487]: E0213 20:15:05.496052 2487 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zxxg2" Feb 13 20:15:05.497610 kubelet[2487]: E0213 20:15:05.496091 2487 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zxxg2" Feb 13 20:15:05.497790 kubelet[2487]: E0213 20:15:05.496154 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zxxg2_kube-system(d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zxxg2_kube-system(d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zxxg2" podUID="d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa" Feb 13 20:15:05.498545 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a-shm.mount: Deactivated successfully. 
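[annotation] Each "Error syncing pod, skipping" above sends the pod back to its worker queue to be retried with backoff until the network plugin is ready. A generic sketch of that retry shape using apimachinery's wait helpers (illustrative only; kubelet's actual backoff wiring differs):

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        attempts := 0
        err := wait.ExponentialBackoff(wait.Backoff{
            Duration: 100 * time.Millisecond, // initial delay (hypothetical values)
            Factor:   2.0,
            Steps:    5,
        }, func() (bool, error) {
            attempts++
            fmt.Println("sync attempt", attempts)
            // Pretend sandbox setup keeps failing, as in the log above;
            // returning (false, nil) asks for another backoff step.
            return false, nil
        })
        if errors.Is(err, wait.ErrWaitTimeout) {
            fmt.Println("still failing after", attempts, "attempts; requeue")
        }
    }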
Feb 13 20:15:05.500604 containerd[1459]: time="2025-02-13T20:15:05.498853641Z" level=error msg="encountered an error cleaning up failed sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.500604 containerd[1459]: time="2025-02-13T20:15:05.498987916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-b8cmn,Uid:ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.505919 kubelet[2487]: E0213 20:15:05.505753 2487 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.505919 kubelet[2487]: E0213 20:15:05.505842 2487 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" Feb 13 20:15:05.506372 kubelet[2487]: E0213 20:15:05.506126 2487 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" Feb 13 20:15:05.506050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc-shm.mount: Deactivated successfully. 
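[annotation] Every sandbox failure in this stretch has the same root cause, spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running, and that container's image is still being pulled at 20:15:04.922. A sketch of the gate being tripped (an illustration of the check, not Calico's code):

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameReady mirrors the readiness check the CNI error text describes:
    // the file exists only after calico/node has started and mounted
    // /var/lib/calico/.
    func nodenameReady() error {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            return fmt.Errorf("stat /var/lib/calico/nodename: %w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        if err := nodenameReady(); err != nil {
            // Until calico/node is up, every CNI add/delete fails like this.
            fmt.Println("cni add would fail:", err)
        }
    }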
Feb 13 20:15:05.507907 kubelet[2487]: E0213 20:15:05.506605 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c485547b7-b8cmn_calico-apiserver(ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c485547b7-b8cmn_calico-apiserver(ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" podUID="ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570" Feb 13 20:15:05.516140 containerd[1459]: time="2025-02-13T20:15:05.516071584Z" level=error msg="Failed to destroy network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.518230 containerd[1459]: time="2025-02-13T20:15:05.517937981Z" level=error msg="encountered an error cleaning up failed sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.518230 containerd[1459]: time="2025-02-13T20:15:05.518041388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dd2xl,Uid:263edc08-b986-475d-a3d0-6d21aa1462c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.522014 kubelet[2487]: E0213 20:15:05.521631 2487 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.522014 kubelet[2487]: E0213 20:15:05.521715 2487 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dd2xl" Feb 13 20:15:05.522014 kubelet[2487]: E0213 20:15:05.521751 2487 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dd2xl" Feb 13 20:15:05.522349 kubelet[2487]: E0213 20:15:05.521804 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dd2xl_kube-system(263edc08-b986-475d-a3d0-6d21aa1462c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dd2xl_kube-system(263edc08-b986-475d-a3d0-6d21aa1462c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dd2xl" podUID="263edc08-b986-475d-a3d0-6d21aa1462c9" Feb 13 20:15:05.532917 containerd[1459]: time="2025-02-13T20:15:05.532410618Z" level=error msg="Failed to destroy network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.532917 containerd[1459]: time="2025-02-13T20:15:05.532779811Z" level=error msg="encountered an error cleaning up failed sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.532917 containerd[1459]: time="2025-02-13T20:15:05.532859286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-qsz9v,Uid:57bc2770-4aec-4375-85f6-bbb47a5304af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.533817 kubelet[2487]: E0213 20:15:05.533546 2487 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:05.533817 kubelet[2487]: E0213 20:15:05.533631 2487 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" Feb 13 20:15:05.533817 kubelet[2487]: E0213 20:15:05.533659 2487 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" Feb 13 20:15:05.534045 kubelet[2487]: E0213 20:15:05.533744 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c485547b7-qsz9v_calico-apiserver(57bc2770-4aec-4375-85f6-bbb47a5304af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c485547b7-qsz9v_calico-apiserver(57bc2770-4aec-4375-85f6-bbb47a5304af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" podUID="57bc2770-4aec-4375-85f6-bbb47a5304af" Feb 13 20:15:05.920159 kubelet[2487]: I0213 20:15:05.917228 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:05.926572 kubelet[2487]: I0213 20:15:05.925707 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:05.928000 containerd[1459]: time="2025-02-13T20:15:05.927929988Z" level=info msg="StopPodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\"" Feb 13 20:15:05.931682 containerd[1459]: time="2025-02-13T20:15:05.931378223Z" level=info msg="Ensure that sandbox de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea in task-service has been cleanup successfully" Feb 13 20:15:05.936160 containerd[1459]: time="2025-02-13T20:15:05.935941860Z" level=info msg="StopPodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\"" Feb 13 20:15:05.936477 containerd[1459]: time="2025-02-13T20:15:05.936295683Z" level=info msg="Ensure that sandbox c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc in task-service has been cleanup successfully" Feb 13 20:15:05.940310 kubelet[2487]: I0213 20:15:05.939771 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:05.941602 containerd[1459]: time="2025-02-13T20:15:05.941541986Z" level=info msg="StopPodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\"" Feb 13 20:15:05.942721 containerd[1459]: time="2025-02-13T20:15:05.942680551Z" level=info msg="Ensure that sandbox 8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6 in task-service has been cleanup successfully" Feb 13 20:15:05.947754 kubelet[2487]: I0213 20:15:05.946872 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:05.950388 containerd[1459]: time="2025-02-13T20:15:05.950216310Z" level=info msg="StopPodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\"" Feb 13 20:15:05.951418 containerd[1459]: time="2025-02-13T20:15:05.951308095Z" level=info msg="Ensure that sandbox 8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc in task-service has been cleanup successfully" Feb 13 
20:15:05.963705 kubelet[2487]: I0213 20:15:05.963329 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:05.965754 containerd[1459]: time="2025-02-13T20:15:05.965703201Z" level=info msg="StopPodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\"" Feb 13 20:15:05.969773 containerd[1459]: time="2025-02-13T20:15:05.967951882Z" level=info msg="Ensure that sandbox 68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5 in task-service has been cleanup successfully" Feb 13 20:15:05.973057 kubelet[2487]: I0213 20:15:05.972819 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:05.974502 containerd[1459]: time="2025-02-13T20:15:05.973921570Z" level=info msg="StopPodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\"" Feb 13 20:15:05.976894 containerd[1459]: time="2025-02-13T20:15:05.976596690Z" level=info msg="Ensure that sandbox e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a in task-service has been cleanup successfully" Feb 13 20:15:06.099992 containerd[1459]: time="2025-02-13T20:15:06.099902043Z" level=error msg="StopPodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" failed" error="failed to destroy network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:06.100853 kubelet[2487]: E0213 20:15:06.100793 2487 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:06.101019 kubelet[2487]: E0213 20:15:06.100885 2487 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6"} Feb 13 20:15:06.101019 kubelet[2487]: E0213 20:15:06.100983 2487 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c75929e-d081-42af-bef0-99987551ea46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:06.101206 kubelet[2487]: E0213 20:15:06.101034 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c75929e-d081-42af-bef0-99987551ea46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvlgz" podUID="3c75929e-d081-42af-bef0-99987551ea46" Feb 13 20:15:06.105513 containerd[1459]: time="2025-02-13T20:15:06.105433830Z" level=error msg="StopPodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" failed" error="failed to destroy network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:06.107193 kubelet[2487]: E0213 20:15:06.106564 2487 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:06.107193 kubelet[2487]: E0213 20:15:06.106662 2487 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea"} Feb 13 20:15:06.107193 kubelet[2487]: E0213 20:15:06.106713 2487 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"263edc08-b986-475d-a3d0-6d21aa1462c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:06.107193 kubelet[2487]: E0213 20:15:06.106749 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"263edc08-b986-475d-a3d0-6d21aa1462c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dd2xl" podUID="263edc08-b986-475d-a3d0-6d21aa1462c9" Feb 13 20:15:06.135623 containerd[1459]: time="2025-02-13T20:15:06.135546454Z" level=error msg="StopPodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" failed" error="failed to destroy network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:06.136456 kubelet[2487]: E0213 20:15:06.136191 2487 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:06.136456 kubelet[2487]: E0213 20:15:06.136291 2487 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc"} Feb 13 20:15:06.136456 kubelet[2487]: E0213 20:15:06.136344 2487 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:06.136456 kubelet[2487]: E0213 20:15:06.136379 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" podUID="ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570" Feb 13 20:15:06.141172 containerd[1459]: time="2025-02-13T20:15:06.140880829Z" level=error msg="StopPodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" failed" error="failed to destroy network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:06.141504 kubelet[2487]: E0213 20:15:06.141394 2487 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:06.141504 kubelet[2487]: E0213 20:15:06.141483 2487 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc"} Feb 13 20:15:06.141694 kubelet[2487]: E0213 20:15:06.141554 2487 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56118977-ed07-436f-9631-a73a6dbd0a3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:06.141694 kubelet[2487]: E0213 20:15:06.141596 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56118977-ed07-436f-9631-a73a6dbd0a3a\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8665865df-9q9vt" podUID="56118977-ed07-436f-9631-a73a6dbd0a3a" Feb 13 20:15:06.152953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5-shm.mount: Deactivated successfully. Feb 13 20:15:06.153133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea-shm.mount: Deactivated successfully. Feb 13 20:15:06.162753 containerd[1459]: time="2025-02-13T20:15:06.162675309Z" level=error msg="StopPodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" failed" error="failed to destroy network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:06.163093 kubelet[2487]: E0213 20:15:06.163019 2487 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:06.163093 kubelet[2487]: E0213 20:15:06.163093 2487 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a"} Feb 13 20:15:06.163390 kubelet[2487]: E0213 20:15:06.163152 2487 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:06.163390 kubelet[2487]: E0213 20:15:06.163189 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zxxg2" podUID="d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa" Feb 13 20:15:06.167606 containerd[1459]: time="2025-02-13T20:15:06.167451692Z" level=error msg="StopPodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" failed" error="failed to destroy network for sandbox 
\"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:06.168156 kubelet[2487]: E0213 20:15:06.168067 2487 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:06.168360 kubelet[2487]: E0213 20:15:06.168154 2487 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5"} Feb 13 20:15:06.168360 kubelet[2487]: E0213 20:15:06.168231 2487 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57bc2770-4aec-4375-85f6-bbb47a5304af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:06.168360 kubelet[2487]: E0213 20:15:06.168306 2487 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57bc2770-4aec-4375-85f6-bbb47a5304af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" podUID="57bc2770-4aec-4375-85f6-bbb47a5304af" Feb 13 20:15:11.977743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946132249.mount: Deactivated successfully. 
Feb 13 20:15:12.058726 containerd[1459]: time="2025-02-13T20:15:12.044222478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:15:12.062132 containerd[1459]: time="2025-02-13T20:15:12.062058378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:12.081953 containerd[1459]: time="2025-02-13T20:15:12.081900703Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:12.092499 containerd[1459]: time="2025-02-13T20:15:12.092437509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:12.093941 containerd[1459]: time="2025-02-13T20:15:12.093844025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.171125732s" Feb 13 20:15:12.094062 containerd[1459]: time="2025-02-13T20:15:12.093919158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:15:12.151393 containerd[1459]: time="2025-02-13T20:15:12.151332126Z" level=info msg="CreateContainer within sandbox \"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:15:12.193163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687984921.mount: Deactivated successfully. Feb 13 20:15:12.224584 containerd[1459]: time="2025-02-13T20:15:12.223744790Z" level=info msg="CreateContainer within sandbox \"6df2c7cd33159e02c790d9b2500f09ddd62c063f8f28123029236e2fdca7a68a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"810b1f770fde300dc0387f1c7052bd9604986fb3d505efb256e426a82199e197\"" Feb 13 20:15:12.225280 containerd[1459]: time="2025-02-13T20:15:12.224857111Z" level=info msg="StartContainer for \"810b1f770fde300dc0387f1c7052bd9604986fb3d505efb256e426a82199e197\"" Feb 13 20:15:12.377046 systemd[1]: Started cri-containerd-810b1f770fde300dc0387f1c7052bd9604986fb3d505efb256e426a82199e197.scope - libcontainer container 810b1f770fde300dc0387f1c7052bd9604986fb3d505efb256e426a82199e197. Feb 13 20:15:12.436055 containerd[1459]: time="2025-02-13T20:15:12.435883319Z" level=info msg="StartContainer for \"810b1f770fde300dc0387f1c7052bd9604986fb3d505efb256e426a82199e197\" returns successfully" Feb 13 20:15:12.635439 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:15:12.635777 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
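The deadlock breaks here: containerd finishes pulling ghcr.io/flatcar/calico/node:v3.29.1 (about 142 MB in roughly 7.2 s) and starts the calico-node container, after which /var/lib/calico/nodename can appear. A hedged sketch of the same pull through containerd's Go client; the socket path and the k8s.io namespace are the usual defaults and are assumed here, not read from the log:

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to containerd; /run/containerd/containerd.sock is the
    	// stock socket location (assumed, not shown in the log).
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull and unpack the same image the kubelet requested above.
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }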
Feb 13 20:15:13.082472 kubelet[2487]: I0213 20:15:13.063815 2487 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:15:13.087546 kubelet[2487]: E0213 20:15:13.087495 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:13.088363 kubelet[2487]: E0213 20:15:13.087794 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:13.147525 kubelet[2487]: I0213 20:15:13.147445 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gg2w5" podStartSLOduration=1.838851337 podStartE2EDuration="20.147410707s" podCreationTimestamp="2025-02-13 20:14:53 +0000 UTC" firstStartedPulling="2025-02-13 20:14:53.786803494 +0000 UTC m=+13.255326873" lastFinishedPulling="2025-02-13 20:15:12.095362873 +0000 UTC m=+31.563886243" observedRunningTime="2025-02-13 20:15:13.139369122 +0000 UTC m=+32.607892527" watchObservedRunningTime="2025-02-13 20:15:13.147410707 +0000 UTC m=+32.615934114" Feb 13 20:15:14.070995 kubelet[2487]: E0213 20:15:14.070929 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:14.736299 kernel: bpftool[3819]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:15:15.038156 systemd-networkd[1367]: vxlan.calico: Link UP Feb 13 20:15:15.038170 systemd-networkd[1367]: vxlan.calico: Gained carrier Feb 13 20:15:16.479465 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Feb 13 20:15:17.702772 containerd[1459]: time="2025-02-13T20:15:17.702724830Z" level=info msg="StopPodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\"" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.801 [INFO][3908] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.802 [INFO][3908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" iface="eth0" netns="/var/run/netns/cni-a5f7860b-67ea-2e6a-d1e9-8afc49fc7702" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.802 [INFO][3908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" iface="eth0" netns="/var/run/netns/cni-a5f7860b-67ea-2e6a-d1e9-8afc49fc7702" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.805 [INFO][3908] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" iface="eth0" netns="/var/run/netns/cni-a5f7860b-67ea-2e6a-d1e9-8afc49fc7702" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.805 [INFO][3908] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.805 [INFO][3908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.991 [INFO][3914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.993 [INFO][3914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:17.993 [INFO][3914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:18.006 [WARNING][3914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:18.006 [INFO][3914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:18.008 [INFO][3914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:18.017281 containerd[1459]: 2025-02-13 20:15:18.010 [INFO][3908] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:18.018205 containerd[1459]: time="2025-02-13T20:15:18.018120404Z" level=info msg="TearDown network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" successfully" Feb 13 20:15:18.018205 containerd[1459]: time="2025-02-13T20:15:18.018191184Z" level=info msg="StopPodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" returns successfully" Feb 13 20:15:18.021813 systemd[1]: run-netns-cni\x2da5f7860b\x2d67ea\x2d2e6a\x2dd1e9\x2d8afc49fc7702.mount: Deactivated successfully. 
Feb 13 20:15:18.023454 containerd[1459]: time="2025-02-13T20:15:18.022348229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-b8cmn,Uid:ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:15:18.240614 systemd-networkd[1367]: cali7305ff68990: Link UP Feb 13 20:15:18.241637 systemd-networkd[1367]: cali7305ff68990: Gained carrier Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.115 [INFO][3923] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0 calico-apiserver-5c485547b7- calico-apiserver ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570 765 0 2025-02-13 20:14:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c485547b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-9-4d1da4e47c calico-apiserver-5c485547b7-b8cmn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7305ff68990 [] []}} ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.116 [INFO][3923] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.167 [INFO][3936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" HandleID="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.181 [INFO][3936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" HandleID="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042b990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-9-4d1da4e47c", "pod":"calico-apiserver-5c485547b7-b8cmn", "timestamp":"2025-02-13 20:15:18.167129873 +0000 UTC"}, Hostname:"ci-4081.3.1-9-4d1da4e47c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.181 [INFO][3936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.181 [INFO][3936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.181 [INFO][3936] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-9-4d1da4e47c' Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.184 [INFO][3936] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.196 [INFO][3936] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.204 [INFO][3936] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.207 [INFO][3936] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.210 [INFO][3936] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.210 [INFO][3936] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.213 [INFO][3936] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7 Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.220 [INFO][3936] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.230 [INFO][3936] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.193/26] block=192.168.115.192/26 handle="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.230 [INFO][3936] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.193/26] handle="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.230 [INFO][3936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:15:18.267945 containerd[1459]: 2025-02-13 20:15:18.230 [INFO][3936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.193/26] IPv6=[] ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" HandleID="k8s-pod-network.10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.270001 containerd[1459]: 2025-02-13 20:15:18.234 [INFO][3923] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"", Pod:"calico-apiserver-5c485547b7-b8cmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7305ff68990", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:18.270001 containerd[1459]: 2025-02-13 20:15:18.234 [INFO][3923] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.193/32] ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.270001 containerd[1459]: 2025-02-13 20:15:18.234 [INFO][3923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7305ff68990 ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.270001 containerd[1459]: 2025-02-13 20:15:18.241 [INFO][3923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.270001 containerd[1459]: 2025-02-13 20:15:18.242 [INFO][3923] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7", Pod:"calico-apiserver-5c485547b7-b8cmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7305ff68990", MAC:"a6:00:14:31:99:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:18.270001 containerd[1459]: 2025-02-13 20:15:18.259 [INFO][3923] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-b8cmn" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:18.325587 containerd[1459]: time="2025-02-13T20:15:18.325421238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:18.325751 containerd[1459]: time="2025-02-13T20:15:18.325623366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:18.325751 containerd[1459]: time="2025-02-13T20:15:18.325685237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:18.326131 containerd[1459]: time="2025-02-13T20:15:18.326061178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:18.362538 systemd[1]: Started cri-containerd-10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7.scope - libcontainer container 10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7. 
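The IPAM trace above shows the node claiming its affine block 192.168.115.192/26 and handing the first assignment, 192.168.115.193, to the apiserver pod. The arithmetic is easy to confirm with net/netip; this standalone check is mine, not Calico's:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affine block claimed by the host in the log.
    	block := netip.MustParsePrefix("192.168.115.192/26")

    	// A /26 spans 2^(32-26) = 64 addresses: .192 up to .255.
    	fmt.Println("block size:", 1<<(32-block.Bits()))

    	// 192.168.115.193 is the second address in that range, matching
    	// the "Successfully claimed IPs" line above.
    	target := netip.MustParseAddr("192.168.115.193")
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if a == target {
    			fmt.Println("in block, assigned to the apiserver pod:", a)
    		}
    	}
    }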
Feb 13 20:15:18.419559 containerd[1459]: time="2025-02-13T20:15:18.419515231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-b8cmn,Uid:ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7\"" Feb 13 20:15:18.424278 containerd[1459]: time="2025-02-13T20:15:18.423676414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:15:19.703070 containerd[1459]: time="2025-02-13T20:15:19.703022495Z" level=info msg="StopPodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\"" Feb 13 20:15:19.704362 containerd[1459]: time="2025-02-13T20:15:19.704287144Z" level=info msg="StopPodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\"" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.801 [INFO][4025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.802 [INFO][4025] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" iface="eth0" netns="/var/run/netns/cni-0a2e468b-1d75-8526-5c48-c9a937915f7a" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.802 [INFO][4025] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" iface="eth0" netns="/var/run/netns/cni-0a2e468b-1d75-8526-5c48-c9a937915f7a" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.803 [INFO][4025] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" iface="eth0" netns="/var/run/netns/cni-0a2e468b-1d75-8526-5c48-c9a937915f7a" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.803 [INFO][4025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.804 [INFO][4025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.870 [INFO][4033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.870 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.870 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.885 [WARNING][4033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.885 [INFO][4033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.888 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:19.901183 containerd[1459]: 2025-02-13 20:15:19.895 [INFO][4025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:19.901996 containerd[1459]: time="2025-02-13T20:15:19.901420309Z" level=info msg="TearDown network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" successfully" Feb 13 20:15:19.901996 containerd[1459]: time="2025-02-13T20:15:19.901471735Z" level=info msg="StopPodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" returns successfully" Feb 13 20:15:19.903373 kubelet[2487]: E0213 20:15:19.902954 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:19.906547 systemd[1]: run-netns-cni\x2d0a2e468b\x2d1d75\x2d8526\x2d5c48\x2dc9a937915f7a.mount: Deactivated successfully. Feb 13 20:15:19.913035 containerd[1459]: time="2025-02-13T20:15:19.912954020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxxg2,Uid:d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa,Namespace:kube-system,Attempt:1,}" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.820 [INFO][4021] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.820 [INFO][4021] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" iface="eth0" netns="/var/run/netns/cni-1fa55c17-86f9-9137-b87f-8ce45e283255" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.821 [INFO][4021] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" iface="eth0" netns="/var/run/netns/cni-1fa55c17-86f9-9137-b87f-8ce45e283255" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.822 [INFO][4021] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" iface="eth0" netns="/var/run/netns/cni-1fa55c17-86f9-9137-b87f-8ce45e283255" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.822 [INFO][4021] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.822 [INFO][4021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.882 [INFO][4037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.882 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.888 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.914 [WARNING][4037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.914 [INFO][4037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.918 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:19.926430 containerd[1459]: 2025-02-13 20:15:19.921 [INFO][4021] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:19.927833 containerd[1459]: time="2025-02-13T20:15:19.927512867Z" level=info msg="TearDown network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" successfully" Feb 13 20:15:19.927833 containerd[1459]: time="2025-02-13T20:15:19.927553836Z" level=info msg="StopPodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" returns successfully" Feb 13 20:15:19.932050 containerd[1459]: time="2025-02-13T20:15:19.931750026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvlgz,Uid:3c75929e-d081-42af-bef0-99987551ea46,Namespace:calico-system,Attempt:1,}" Feb 13 20:15:19.936228 systemd[1]: run-netns-cni\x2d1fa55c17\x2d86f9\x2d9137\x2db87f\x2d8ce45e283255.mount: Deactivated successfully. 
Feb 13 20:15:19.998800 systemd-networkd[1367]: cali7305ff68990: Gained IPv6LL Feb 13 20:15:20.267608 systemd-networkd[1367]: calieaa6eca53c2: Link UP Feb 13 20:15:20.269285 systemd-networkd[1367]: calieaa6eca53c2: Gained carrier Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.036 [INFO][4046] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0 coredns-668d6bf9bc- kube-system d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa 777 0 2025-02-13 20:14:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-9-4d1da4e47c coredns-668d6bf9bc-zxxg2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieaa6eca53c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.037 [INFO][4046] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.138 [INFO][4068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" HandleID="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.167 [INFO][4068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" HandleID="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050de0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-9-4d1da4e47c", "pod":"coredns-668d6bf9bc-zxxg2", "timestamp":"2025-02-13 20:15:20.138968624 +0000 UTC"}, Hostname:"ci-4081.3.1-9-4d1da4e47c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.167 [INFO][4068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.168 [INFO][4068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.168 [INFO][4068] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-9-4d1da4e47c' Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.172 [INFO][4068] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.186 [INFO][4068] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.210 [INFO][4068] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.217 [INFO][4068] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.223 [INFO][4068] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.223 [INFO][4068] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.226 [INFO][4068] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52 Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.237 [INFO][4068] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.253 [INFO][4068] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.194/26] block=192.168.115.192/26 handle="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.253 [INFO][4068] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.194/26] handle="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.253 [INFO][4068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:15:20.308677 containerd[1459]: 2025-02-13 20:15:20.253 [INFO][4068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.194/26] IPv6=[] ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" HandleID="k8s-pod-network.1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.310518 containerd[1459]: 2025-02-13 20:15:20.258 [INFO][4046] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"", Pod:"coredns-668d6bf9bc-zxxg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa6eca53c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:20.310518 containerd[1459]: 2025-02-13 20:15:20.259 [INFO][4046] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.194/32] ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.310518 containerd[1459]: 2025-02-13 20:15:20.259 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieaa6eca53c2 ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.310518 containerd[1459]: 2025-02-13 20:15:20.271 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" 
WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.310518 containerd[1459]: 2025-02-13 20:15:20.274 [INFO][4046] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52", Pod:"coredns-668d6bf9bc-zxxg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa6eca53c2", MAC:"be:be:da:85:67:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:20.310518 containerd[1459]: 2025-02-13 20:15:20.298 [INFO][4046] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxxg2" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:20.401691 systemd-networkd[1367]: calicd173aff915: Link UP Feb 13 20:15:20.403044 systemd-networkd[1367]: calicd173aff915: Gained carrier Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.061 [INFO][4059] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0 csi-node-driver- calico-system 3c75929e-d081-42af-bef0-99987551ea46 778 0 2025-02-13 20:14:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-9-4d1da4e47c csi-node-driver-mvlgz eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calicd173aff915 [] []}} ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.062 [INFO][4059] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.189 [INFO][4073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" HandleID="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.216 [INFO][4073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" HandleID="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041a840), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-9-4d1da4e47c", "pod":"csi-node-driver-mvlgz", "timestamp":"2025-02-13 20:15:20.189903852 +0000 UTC"}, Hostname:"ci-4081.3.1-9-4d1da4e47c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.216 [INFO][4073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.253 [INFO][4073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
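[Annotation] Note the timing: [4073] (the csi-node-driver pod) logs "Acquired host-wide IPAM lock" at 20:15:20.253, the same instant [4068] logs its release. Concurrent CNI invocations on one node are serialized behind a single host-wide lock so two pods cannot claim the same address. A minimal sketch of that pattern using an advisory flock; whether Calico uses exactly this mechanism and path is an assumption here, the serialization behavior is what the log shows.

    // Sketch: serialize per-node IPAM bookkeeping behind one advisory file
    // lock. The lock path is illustrative, not Calico's actual path.
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()

        // Blocks until any other CNI process releases the lock, which is
        // why [4073] waits on [4068] above.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        return fn()
    }

    func main() {
        err := withHostWideLock("/var/run/example-ipam.lock", func() error {
            fmt.Println("assigning addresses under the lock")
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }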
Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.254 [INFO][4073] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-9-4d1da4e47c' Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.273 [INFO][4073] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.293 [INFO][4073] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.319 [INFO][4073] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.334 [INFO][4073] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.345 [INFO][4073] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.345 [INFO][4073] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.352 [INFO][4073] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5 Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.363 [INFO][4073] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.382 [INFO][4073] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.195/26] block=192.168.115.192/26 handle="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.383 [INFO][4073] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.195/26] handle="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.383 [INFO][4073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
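[Annotation] Two reading aids for the large endpoint dumps in this log. First, they are Go %#v output, so numeric fields print in hex: Port:0x35 is DNS port 53 and Port:0x23c1 is the CoreDNS metrics port 9153. Second, each pod's endpoint is dumped twice: "Populated endpoint" prints it with MAC:"" and ContainerID:"" still empty, then "Added Mac, interface name, and active container ID to endpoint" prints it again with both filled in. A quick decode of the hex ports:

    package main

    import "fmt"

    func main() {
        // The %#v dumps above render integers in hex; these are the same
        // ports in decimal.
        for _, p := range []struct {
            name string
            port uint16
        }{
            {"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1},
        } {
            fmt.Printf("%s -> %d\n", p.name, p.port) // dns -> 53, metrics -> 9153
        }
    }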
Feb 13 20:15:20.456285 containerd[1459]: 2025-02-13 20:15:20.383 [INFO][4073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.195/26] IPv6=[] ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" HandleID="k8s-pod-network.3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.456968 containerd[1459]: 2025-02-13 20:15:20.390 [INFO][4059] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c75929e-d081-42af-bef0-99987551ea46", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"", Pod:"csi-node-driver-mvlgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd173aff915", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:20.456968 containerd[1459]: 2025-02-13 20:15:20.390 [INFO][4059] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.195/32] ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.456968 containerd[1459]: 2025-02-13 20:15:20.391 [INFO][4059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd173aff915 ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.456968 containerd[1459]: 2025-02-13 20:15:20.408 [INFO][4059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.456968 containerd[1459]: 2025-02-13 20:15:20.410 [INFO][4059] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c75929e-d081-42af-bef0-99987551ea46", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5", Pod:"csi-node-driver-mvlgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd173aff915", MAC:"9e:4b:d4:b1:04:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:20.456968 containerd[1459]: 2025-02-13 20:15:20.447 [INFO][4059] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5" Namespace="calico-system" Pod="csi-node-driver-mvlgz" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:20.505744 containerd[1459]: time="2025-02-13T20:15:20.504871132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:20.505744 containerd[1459]: time="2025-02-13T20:15:20.504946572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:20.505744 containerd[1459]: time="2025-02-13T20:15:20.504965290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:20.507730 containerd[1459]: time="2025-02-13T20:15:20.507454112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:20.546184 containerd[1459]: time="2025-02-13T20:15:20.543533728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:20.547662 containerd[1459]: time="2025-02-13T20:15:20.547540492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:20.547823 containerd[1459]: time="2025-02-13T20:15:20.547681054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:20.549982 containerd[1459]: time="2025-02-13T20:15:20.549844851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:20.571958 systemd[1]: Started cri-containerd-1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52.scope - libcontainer container 1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52. Feb 13 20:15:20.599938 systemd[1]: Started cri-containerd-3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5.scope - libcontainer container 3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5. Feb 13 20:15:20.708198 containerd[1459]: time="2025-02-13T20:15:20.707628856Z" level=info msg="StopPodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\"" Feb 13 20:15:20.714115 containerd[1459]: time="2025-02-13T20:15:20.708021775Z" level=info msg="StopPodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\"" Feb 13 20:15:20.729589 containerd[1459]: time="2025-02-13T20:15:20.729344990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvlgz,Uid:3c75929e-d081-42af-bef0-99987551ea46,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5\"" Feb 13 20:15:20.730652 containerd[1459]: time="2025-02-13T20:15:20.730586404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxxg2,Uid:d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa,Namespace:kube-system,Attempt:1,} returns sandbox id \"1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52\"" Feb 13 20:15:20.733401 kubelet[2487]: E0213 20:15:20.733360 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:20.743018 containerd[1459]: time="2025-02-13T20:15:20.741289598Z" level=info msg="CreateContainer within sandbox \"1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:15:20.829968 containerd[1459]: time="2025-02-13T20:15:20.829427486Z" level=info msg="CreateContainer within sandbox \"1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abd1b3eedfca31e61dfc2f6e937661329c471f00a5600bc71a101a18542dbb76\"" Feb 13 20:15:20.839254 containerd[1459]: time="2025-02-13T20:15:20.839020224Z" level=info msg="StartContainer for \"abd1b3eedfca31e61dfc2f6e937661329c471f00a5600bc71a101a18542dbb76\"" Feb 13 20:15:20.941577 systemd[1]: Started cri-containerd-abd1b3eedfca31e61dfc2f6e937661329c471f00a5600bc71a101a18542dbb76.scope - libcontainer container abd1b3eedfca31e61dfc2f6e937661329c471f00a5600bc71a101a18542dbb76. Feb 13 20:15:21.058948 containerd[1459]: time="2025-02-13T20:15:21.058735526Z" level=info msg="StartContainer for \"abd1b3eedfca31e61dfc2f6e937661329c471f00a5600bc71a101a18542dbb76\" returns successfully" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:20.959 [INFO][4227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:20.961 [INFO][4227] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" iface="eth0" netns="/var/run/netns/cni-c5a3a104-3ed5-f669-44a1-2371d4e66d58" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:20.962 [INFO][4227] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" iface="eth0" netns="/var/run/netns/cni-c5a3a104-3ed5-f669-44a1-2371d4e66d58" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:20.963 [INFO][4227] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" iface="eth0" netns="/var/run/netns/cni-c5a3a104-3ed5-f669-44a1-2371d4e66d58" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:20.963 [INFO][4227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:20.963 [INFO][4227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.033 [INFO][4265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.033 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.033 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.048 [WARNING][4265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.048 [INFO][4265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.051 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:21.064343 containerd[1459]: 2025-02-13 20:15:21.055 [INFO][4227] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:21.066641 containerd[1459]: time="2025-02-13T20:15:21.065392084Z" level=info msg="TearDown network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" successfully" Feb 13 20:15:21.066641 containerd[1459]: time="2025-02-13T20:15:21.065802308Z" level=info msg="StopPodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" returns successfully" Feb 13 20:15:21.067495 containerd[1459]: time="2025-02-13T20:15:21.067342179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-qsz9v,Uid:57bc2770-4aec-4375-85f6-bbb47a5304af,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:15:21.073994 systemd[1]: run-netns-cni\x2dc5a3a104\x2d3ed5\x2df669\x2d44a1\x2d2371d4e66d58.mount: Deactivated successfully. Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:20.926 [INFO][4222] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:20.926 [INFO][4222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" iface="eth0" netns="/var/run/netns/cni-40e32bd2-9b4c-3530-844f-b8f36174d027" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:20.928 [INFO][4222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" iface="eth0" netns="/var/run/netns/cni-40e32bd2-9b4c-3530-844f-b8f36174d027" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:20.929 [INFO][4222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" iface="eth0" netns="/var/run/netns/cni-40e32bd2-9b4c-3530-844f-b8f36174d027" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:20.929 [INFO][4222] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:20.929 [INFO][4222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.054 [INFO][4255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.054 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.054 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.081 [WARNING][4255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.081 [INFO][4255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.086 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:21.102384 containerd[1459]: 2025-02-13 20:15:21.090 [INFO][4222] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:21.102384 containerd[1459]: time="2025-02-13T20:15:21.100191290Z" level=info msg="TearDown network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" successfully" Feb 13 20:15:21.102384 containerd[1459]: time="2025-02-13T20:15:21.100314325Z" level=info msg="StopPodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" returns successfully" Feb 13 20:15:21.109559 containerd[1459]: time="2025-02-13T20:15:21.106617581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8665865df-9q9vt,Uid:56118977-ed07-436f-9631-a73a6dbd0a3a,Namespace:calico-system,Attempt:1,}" Feb 13 20:15:21.108169 systemd[1]: run-netns-cni\x2d40e32bd2\x2d9b4c\x2d3530\x2d844f\x2db8f36174d027.mount: Deactivated successfully. 
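[Annotation] The two StopPodSandbox teardowns above (68fd4adee6... and 8dbccc682a...) show the DEL path deliberately treating absence as success: the veth is "already gone. Nothing to do.", and the IPAM release warns "Asked to release address but it doesn't exist. Ignoring". CNI DEL can be retried after a partial cleanup, so every step must be idempotent. (The run-netns-cni\x2d... unit names are just systemd escaping the '-' characters of the netns path as \x2d when it derives mount unit names.) A minimal sketch of the idempotent release, with an in-memory map standing in for the datastore:

    package main

    import "fmt"

    // releaseByHandle mirrors the DEL-path behavior logged above: releasing
    // an address that is already gone is a warning, not a failure.
    func releaseByHandle(store map[string][]string, handle string) {
        if _, ok := store[handle]; !ok {
            fmt.Printf("WARNING: asked to release %q but it doesn't exist. Ignoring\n", handle)
            return
        }
        delete(store, handle)
    }

    func main() {
        store := map[string][]string{} // handle -> IPs; empty after first cleanup
        // A retried DEL for the same sandbox must still succeed:
        releaseByHandle(store, "k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5")
    }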
Feb 13 20:15:21.127161 kubelet[2487]: E0213 20:15:21.126895 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:21.494076 systemd-networkd[1367]: calic15fb987785: Link UP Feb 13 20:15:21.495624 systemd-networkd[1367]: calic15fb987785: Gained carrier Feb 13 20:15:21.542059 kubelet[2487]: I0213 20:15:21.541889 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zxxg2" podStartSLOduration=34.541733227 podStartE2EDuration="34.541733227s" podCreationTimestamp="2025-02-13 20:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:21.163351596 +0000 UTC m=+40.631874997" watchObservedRunningTime="2025-02-13 20:15:21.541733227 +0000 UTC m=+41.010256613" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.231 [INFO][4286] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0 calico-apiserver-5c485547b7- calico-apiserver 57bc2770-4aec-4375-85f6-bbb47a5304af 795 0 2025-02-13 20:14:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c485547b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-9-4d1da4e47c calico-apiserver-5c485547b7-qsz9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic15fb987785 [] []}} ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.231 [INFO][4286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.342 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" HandleID="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.367 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" HandleID="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002846b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-9-4d1da4e47c", "pod":"calico-apiserver-5c485547b7-qsz9v", "timestamp":"2025-02-13 20:15:21.342408486 +0000 UTC"}, Hostname:"ci-4081.3.1-9-4d1da4e47c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.370 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.370 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.370 [INFO][4308] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-9-4d1da4e47c' Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.377 [INFO][4308] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.386 [INFO][4308] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.401 [INFO][4308] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.420 [INFO][4308] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.437 [INFO][4308] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.439 [INFO][4308] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.451 [INFO][4308] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52 Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.462 [INFO][4308] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.479 [INFO][4308] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.196/26] block=192.168.115.192/26 handle="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.479 [INFO][4308] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.196/26] handle="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.479 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
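[Annotation] The recurring kubelet error "Nameserver limits exceeded" (dns.go:153) interleaved with this CNI activity is unrelated to it: the node's resolv.conf lists more nameservers than the three kubelet will pass through (matching glibc's MAXNS of 3), so it truncates the list, and the surviving line here even carries a duplicate, 67.207.67.3 twice. A toy model of that trim; the fourth entry in the host list below is a made-up placeholder, since the log only shows what survived truncation:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc's MAXNS; anything beyond this is dropped

    func trim(nameservers []string) []string {
        if len(nameservers) > maxNameservers {
            return nameservers[:maxNameservers]
        }
        return nameservers
    }

    func main() {
        // First three taken from the log; the fourth is hypothetical.
        hostList := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "198.51.100.1"}
        fmt.Println("the applied nameserver line is:", strings.Join(trim(hostList), " "))
    }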
Feb 13 20:15:21.554249 containerd[1459]: 2025-02-13 20:15:21.479 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.196/26] IPv6=[] ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" HandleID="k8s-pod-network.c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.556525 containerd[1459]: 2025-02-13 20:15:21.485 [INFO][4286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"57bc2770-4aec-4375-85f6-bbb47a5304af", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"", Pod:"calico-apiserver-5c485547b7-qsz9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic15fb987785", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:21.556525 containerd[1459]: 2025-02-13 20:15:21.485 [INFO][4286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.196/32] ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.556525 containerd[1459]: 2025-02-13 20:15:21.485 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic15fb987785 ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.556525 containerd[1459]: 2025-02-13 20:15:21.495 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.556525 containerd[1459]: 2025-02-13 20:15:21.498 [INFO][4286] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"57bc2770-4aec-4375-85f6-bbb47a5304af", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52", Pod:"calico-apiserver-5c485547b7-qsz9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic15fb987785", MAC:"fe:8c:93:3d:2c:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:21.556525 containerd[1459]: 2025-02-13 20:15:21.545 [INFO][4286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52" Namespace="calico-apiserver" Pod="calico-apiserver-5c485547b7-qsz9v" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:21.656927 containerd[1459]: time="2025-02-13T20:15:21.654157681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:21.656927 containerd[1459]: time="2025-02-13T20:15:21.654218932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:21.656927 containerd[1459]: time="2025-02-13T20:15:21.654247655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:21.656927 containerd[1459]: time="2025-02-13T20:15:21.654358197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:21.663334 systemd-networkd[1367]: calieaa6eca53c2: Gained IPv6LL Feb 13 20:15:21.706056 containerd[1459]: time="2025-02-13T20:15:21.706009146Z" level=info msg="StopPodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\"" Feb 13 20:15:21.727686 systemd-networkd[1367]: cali73c53dd1feb: Link UP Feb 13 20:15:21.737963 systemd-networkd[1367]: cali73c53dd1feb: Gained carrier Feb 13 20:15:21.746504 systemd[1]: Started cri-containerd-c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52.scope - libcontainer container c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52. Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.383 [INFO][4295] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0 calico-kube-controllers-8665865df- calico-system 56118977-ed07-436f-9631-a73a6dbd0a3a 794 0 2025-02-13 20:14:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8665865df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-9-4d1da4e47c calico-kube-controllers-8665865df-9q9vt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73c53dd1feb [] []}} ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.384 [INFO][4295] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.501 [INFO][4322] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" HandleID="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.524 [INFO][4322] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" HandleID="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-9-4d1da4e47c", "pod":"calico-kube-controllers-8665865df-9q9vt", "timestamp":"2025-02-13 20:15:21.501117075 +0000 UTC"}, Hostname:"ci-4081.3.1-9-4d1da4e47c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.524 [INFO][4322] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.524 [INFO][4322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.524 [INFO][4322] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-9-4d1da4e47c' Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.540 [INFO][4322] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.622 [INFO][4322] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.642 [INFO][4322] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.648 [INFO][4322] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.654 [INFO][4322] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.654 [INFO][4322] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.660 [INFO][4322] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985 Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.673 [INFO][4322] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.694 [INFO][4322] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.197/26] block=192.168.115.192/26 handle="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.694 [INFO][4322] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.197/26] handle="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.697 [INFO][4322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
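[Annotation] Each "Setting the host side veth name to cali..." line, followed by systemd-networkd's "Link UP", "Gained carrier", and later "Gained IPv6LL" (the fe80:: link-local address becoming usable), concerns the host end of the pod's veth pair. The names in this excerpt (calieaa6eca53c2, calicd173aff915, cali73c53dd1feb, calib78f3b9dcce) are a "cali" prefix plus a short hex hash, which keeps them within Linux's 15-character interface-name limit (IFNAMSIZ minus the NUL). The hash input in this sketch is an assumption; only the name shape is taken from the log:

    package main

    import (
        "crypto/sha1"
        "fmt"
    )

    // hostVethName shows the shape of the scheme: "cali" (4 chars) plus
    // 11 hex chars of a hash = 15 chars, the kernel's maximum.
    func hostVethName(endpointKey string) string {
        sum := sha1.Sum([]byte(endpointKey))
        return fmt.Sprintf("cali%x", sum)[:15]
    }

    func main() {
        // Assumed key; Calico's real hash input may differ.
        fmt.Println(hostVethName("kube-system/coredns-668d6bf9bc-zxxg2"))
    }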
Feb 13 20:15:21.785154 containerd[1459]: 2025-02-13 20:15:21.697 [INFO][4322] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.197/26] IPv6=[] ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" HandleID="k8s-pod-network.e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.786076 containerd[1459]: 2025-02-13 20:15:21.702 [INFO][4295] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0", GenerateName:"calico-kube-controllers-8665865df-", Namespace:"calico-system", SelfLink:"", UID:"56118977-ed07-436f-9631-a73a6dbd0a3a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8665865df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"", Pod:"calico-kube-controllers-8665865df-9q9vt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73c53dd1feb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:21.786076 containerd[1459]: 2025-02-13 20:15:21.702 [INFO][4295] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.197/32] ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.786076 containerd[1459]: 2025-02-13 20:15:21.702 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73c53dd1feb ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.786076 containerd[1459]: 2025-02-13 20:15:21.738 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.786076 
containerd[1459]: 2025-02-13 20:15:21.747 [INFO][4295] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0", GenerateName:"calico-kube-controllers-8665865df-", Namespace:"calico-system", SelfLink:"", UID:"56118977-ed07-436f-9631-a73a6dbd0a3a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8665865df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985", Pod:"calico-kube-controllers-8665865df-9q9vt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73c53dd1feb", MAC:"8a:c1:21:bf:0d:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:21.786076 containerd[1459]: 2025-02-13 20:15:21.774 [INFO][4295] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985" Namespace="calico-system" Pod="calico-kube-controllers-8665865df-9q9vt" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:21.872553 containerd[1459]: time="2025-02-13T20:15:21.872465373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c485547b7-qsz9v,Uid:57bc2770-4aec-4375-85f6-bbb47a5304af,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52\"" Feb 13 20:15:21.898873 containerd[1459]: time="2025-02-13T20:15:21.898441507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:21.900278 containerd[1459]: time="2025-02-13T20:15:21.899662043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:21.901616 containerd[1459]: time="2025-02-13T20:15:21.900826659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:21.906060 containerd[1459]: time="2025-02-13T20:15:21.904566058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:21.981481 systemd[1]: Started cri-containerd-e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985.scope - libcontainer container e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985. Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:21.946 [INFO][4390] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:21.948 [INFO][4390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" iface="eth0" netns="/var/run/netns/cni-4cd55eec-2bc9-02dc-14fa-aae57cc8a93f" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:21.949 [INFO][4390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" iface="eth0" netns="/var/run/netns/cni-4cd55eec-2bc9-02dc-14fa-aae57cc8a93f" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:21.949 [INFO][4390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" iface="eth0" netns="/var/run/netns/cni-4cd55eec-2bc9-02dc-14fa-aae57cc8a93f" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:21.949 [INFO][4390] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:21.949 [INFO][4390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.032 [INFO][4438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.032 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.034 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.046 [WARNING][4438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.046 [INFO][4438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.050 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:22.062776 containerd[1459]: 2025-02-13 20:15:22.056 [INFO][4390] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:22.068008 containerd[1459]: time="2025-02-13T20:15:22.067495547Z" level=info msg="TearDown network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" successfully" Feb 13 20:15:22.068008 containerd[1459]: time="2025-02-13T20:15:22.067540398Z" level=info msg="StopPodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" returns successfully" Feb 13 20:15:22.068048 systemd[1]: run-netns-cni\x2d4cd55eec\x2d2bc9\x2d02dc\x2d14fa\x2daae57cc8a93f.mount: Deactivated successfully. Feb 13 20:15:22.070842 kubelet[2487]: E0213 20:15:22.069217 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:22.072822 containerd[1459]: time="2025-02-13T20:15:22.072319895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dd2xl,Uid:263edc08-b986-475d-a3d0-6d21aa1462c9,Namespace:kube-system,Attempt:1,}" Feb 13 20:15:22.103934 containerd[1459]: time="2025-02-13T20:15:22.103799435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8665865df-9q9vt,Uid:56118977-ed07-436f-9631-a73a6dbd0a3a,Namespace:calico-system,Attempt:1,} returns sandbox id \"e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985\"" Feb 13 20:15:22.149317 kubelet[2487]: E0213 20:15:22.148280 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:22.175429 systemd-networkd[1367]: calicd173aff915: Gained IPv6LL Feb 13 20:15:22.527483 systemd-networkd[1367]: calib78f3b9dcce: Link UP Feb 13 20:15:22.532905 systemd-networkd[1367]: calib78f3b9dcce: Gained carrier Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.265 [INFO][4461] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0 coredns-668d6bf9bc- kube-system 263edc08-b986-475d-a3d0-6d21aa1462c9 812 0 2025-02-13 20:14:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-9-4d1da4e47c coredns-668d6bf9bc-dd2xl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib78f3b9dcce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.266 [INFO][4461] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.377 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" HandleID="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" 
Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.411 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" HandleID="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3b90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-9-4d1da4e47c", "pod":"coredns-668d6bf9bc-dd2xl", "timestamp":"2025-02-13 20:15:22.377144683 +0000 UTC"}, Hostname:"ci-4081.3.1-9-4d1da4e47c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.411 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.411 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.411 [INFO][4471] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-9-4d1da4e47c' Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.425 [INFO][4471] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.436 [INFO][4471] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.448 [INFO][4471] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.451 [INFO][4471] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.458 [INFO][4471] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.459 [INFO][4471] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.470 [INFO][4471] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1 Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.482 [INFO][4471] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.510 [INFO][4471] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.198/26] block=192.168.115.192/26 handle="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.510 [INFO][4471] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.198/26] 
handle="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" host="ci-4081.3.1-9-4d1da4e47c" Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.511 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:22.567071 containerd[1459]: 2025-02-13 20:15:22.511 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.198/26] IPv6=[] ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" HandleID="k8s-pod-network.632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.567965 containerd[1459]: 2025-02-13 20:15:22.517 [INFO][4461] cni-plugin/k8s.go 386: Populated endpoint ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"263edc08-b986-475d-a3d0-6d21aa1462c9", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"", Pod:"coredns-668d6bf9bc-dd2xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78f3b9dcce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:22.567965 containerd[1459]: 2025-02-13 20:15:22.518 [INFO][4461] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.198/32] ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.567965 containerd[1459]: 2025-02-13 20:15:22.518 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib78f3b9dcce ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.567965 
containerd[1459]: 2025-02-13 20:15:22.529 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.567965 containerd[1459]: 2025-02-13 20:15:22.530 [INFO][4461] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"263edc08-b986-475d-a3d0-6d21aa1462c9", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1", Pod:"coredns-668d6bf9bc-dd2xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78f3b9dcce", MAC:"52:89:1b:f2:67:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:22.567965 containerd[1459]: 2025-02-13 20:15:22.557 [INFO][4461] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dd2xl" WorkloadEndpoint="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:22.612695 containerd[1459]: time="2025-02-13T20:15:22.610966686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:22.612695 containerd[1459]: time="2025-02-13T20:15:22.611030377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:22.612695 containerd[1459]: time="2025-02-13T20:15:22.611040999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:22.612695 containerd[1459]: time="2025-02-13T20:15:22.611141230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:22.650491 systemd[1]: Started cri-containerd-632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1.scope - libcontainer container 632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1. Feb 13 20:15:22.725938 containerd[1459]: time="2025-02-13T20:15:22.725885947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dd2xl,Uid:263edc08-b986-475d-a3d0-6d21aa1462c9,Namespace:kube-system,Attempt:1,} returns sandbox id \"632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1\"" Feb 13 20:15:22.727500 kubelet[2487]: E0213 20:15:22.727455 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:22.736202 containerd[1459]: time="2025-02-13T20:15:22.735948122Z" level=info msg="CreateContainer within sandbox \"632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:15:22.752143 containerd[1459]: time="2025-02-13T20:15:22.751889683Z" level=info msg="CreateContainer within sandbox \"632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ea99d44f6545c260a127cc41fe08f304405a8976952b4b488f77998fa7daeae\"" Feb 13 20:15:22.757291 containerd[1459]: time="2025-02-13T20:15:22.754572086Z" level=info msg="StartContainer for \"6ea99d44f6545c260a127cc41fe08f304405a8976952b4b488f77998fa7daeae\"" Feb 13 20:15:22.812739 systemd[1]: Started cri-containerd-6ea99d44f6545c260a127cc41fe08f304405a8976952b4b488f77998fa7daeae.scope - libcontainer container 6ea99d44f6545c260a127cc41fe08f304405a8976952b4b488f77998fa7daeae. 
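The IPAM trace above shows Calico confirming this node's affinity for the block 192.168.115.192/26 and then claiming 192.168.115.198 from it for the coredns pod. A quick containment check of those figures, as an illustrative sketch using only the Python standard library (the block and address are copied from the ipam entries; nothing else comes from the log):

    import ipaddress

    # Block and address from the ipam/ipam.go entries above.
    block = ipaddress.ip_network("192.168.115.192/26")
    claimed = ipaddress.ip_address("192.168.115.198")

    # A /26 holds 64 addresses (.192-.255); Calico hands out per-node
    # affine blocks of this size and assigns pod IPs from them.
    assert claimed in block
    print(f"{claimed} is one of {block.num_addresses} addresses in {block}")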
Feb 13 20:15:22.868256 containerd[1459]: time="2025-02-13T20:15:22.868190641Z" level=info msg="StartContainer for \"6ea99d44f6545c260a127cc41fe08f304405a8976952b4b488f77998fa7daeae\" returns successfully" Feb 13 20:15:22.911886 containerd[1459]: time="2025-02-13T20:15:22.911833540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:22.915820 containerd[1459]: time="2025-02-13T20:15:22.915634525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:15:22.916495 containerd[1459]: time="2025-02-13T20:15:22.916465176Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:22.920217 containerd[1459]: time="2025-02-13T20:15:22.920037135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:22.921538 containerd[1459]: time="2025-02-13T20:15:22.921451682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.49772988s" Feb 13 20:15:22.921711 containerd[1459]: time="2025-02-13T20:15:22.921685058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:15:22.925435 containerd[1459]: time="2025-02-13T20:15:22.925053820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:15:22.927887 containerd[1459]: time="2025-02-13T20:15:22.927491057Z" level=info msg="CreateContainer within sandbox \"10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:15:22.944057 containerd[1459]: time="2025-02-13T20:15:22.943402737Z" level=info msg="CreateContainer within sandbox \"10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59e3657ef30458ed473b43bee2385e0d538b3b64d6ef4a88b1dc894286495c61\"" Feb 13 20:15:22.945182 containerd[1459]: time="2025-02-13T20:15:22.945078412Z" level=info msg="StartContainer for \"59e3657ef30458ed473b43bee2385e0d538b3b64d6ef4a88b1dc894286495c61\"" Feb 13 20:15:23.016555 systemd[1]: Started cri-containerd-59e3657ef30458ed473b43bee2385e0d538b3b64d6ef4a88b1dc894286495c61.scope - libcontainer container 59e3657ef30458ed473b43bee2385e0d538b3b64d6ef4a88b1dc894286495c61. 
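The PullImage entry above reports both a size and a wall-clock duration (size "43494504" in 4.49772988s), so an effective pull rate falls out directly; a small sketch with the figures copied from the log (the helper arithmetic is ours):

    # Figures from the calico/apiserver PullImage entry above.
    size_bytes = 43_494_504
    duration_s = 4.49772988

    mib_per_s = size_bytes / duration_s / (1024 * 1024)
    print(f"effective pull rate: ~{mib_per_s:.1f} MiB/s")  # ~9.2 MiB/s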
Feb 13 20:15:23.079670 containerd[1459]: time="2025-02-13T20:15:23.079413346Z" level=info msg="StartContainer for \"59e3657ef30458ed473b43bee2385e0d538b3b64d6ef4a88b1dc894286495c61\" returns successfully" Feb 13 20:15:23.153759 kubelet[2487]: E0213 20:15:23.153722 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:23.159679 kubelet[2487]: E0213 20:15:23.159518 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:23.210084 kubelet[2487]: I0213 20:15:23.210011 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dd2xl" podStartSLOduration=36.209985951 podStartE2EDuration="36.209985951s" podCreationTimestamp="2025-02-13 20:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:23.192880132 +0000 UTC m=+42.661403529" watchObservedRunningTime="2025-02-13 20:15:23.209985951 +0000 UTC m=+42.678509339" Feb 13 20:15:23.518822 systemd-networkd[1367]: calic15fb987785: Gained IPv6LL Feb 13 20:15:23.583277 systemd-networkd[1367]: cali73c53dd1feb: Gained IPv6LL Feb 13 20:15:24.030736 systemd-networkd[1367]: calib78f3b9dcce: Gained IPv6LL Feb 13 20:15:24.163500 kubelet[2487]: E0213 20:15:24.162830 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:24.165862 kubelet[2487]: E0213 20:15:24.165216 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:24.373487 kubelet[2487]: I0213 20:15:24.373317 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c485547b7-b8cmn" podStartSLOduration=27.871177901 podStartE2EDuration="32.373297397s" podCreationTimestamp="2025-02-13 20:14:52 +0000 UTC" firstStartedPulling="2025-02-13 20:15:18.421557792 +0000 UTC m=+37.890081159" lastFinishedPulling="2025-02-13 20:15:22.923677277 +0000 UTC m=+42.392200655" observedRunningTime="2025-02-13 20:15:23.237752138 +0000 UTC m=+42.706275535" watchObservedRunningTime="2025-02-13 20:15:24.373297397 +0000 UTC m=+43.841820783" Feb 13 20:15:24.523306 containerd[1459]: time="2025-02-13T20:15:24.523150303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:24.525533 containerd[1459]: time="2025-02-13T20:15:24.525351820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:15:24.526515 containerd[1459]: time="2025-02-13T20:15:24.526477562Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:24.529104 containerd[1459]: time="2025-02-13T20:15:24.528728291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 20:15:24.530844 containerd[1459]: time="2025-02-13T20:15:24.530789152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.605682186s" Feb 13 20:15:24.531064 containerd[1459]: time="2025-02-13T20:15:24.531040374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:15:24.532883 containerd[1459]: time="2025-02-13T20:15:24.532590678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:15:24.536438 containerd[1459]: time="2025-02-13T20:15:24.536372442Z" level=info msg="CreateContainer within sandbox \"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:15:24.567019 containerd[1459]: time="2025-02-13T20:15:24.566869088Z" level=info msg="CreateContainer within sandbox \"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6cba7e26a42c804a8979c391d21875a6b36b885b7852cc2e3e9f29908ba1b0d4\"" Feb 13 20:15:24.567701 containerd[1459]: time="2025-02-13T20:15:24.567669270Z" level=info msg="StartContainer for \"6cba7e26a42c804a8979c391d21875a6b36b885b7852cc2e3e9f29908ba1b0d4\"" Feb 13 20:15:24.614604 systemd[1]: Started cri-containerd-6cba7e26a42c804a8979c391d21875a6b36b885b7852cc2e3e9f29908ba1b0d4.scope - libcontainer container 6cba7e26a42c804a8979c391d21875a6b36b885b7852cc2e3e9f29908ba1b0d4. 
Feb 13 20:15:24.656696 containerd[1459]: time="2025-02-13T20:15:24.656471276Z" level=info msg="StartContainer for \"6cba7e26a42c804a8979c391d21875a6b36b885b7852cc2e3e9f29908ba1b0d4\" returns successfully" Feb 13 20:15:24.994158 containerd[1459]: time="2025-02-13T20:15:24.994100530Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:24.994999 containerd[1459]: time="2025-02-13T20:15:24.994914989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:15:25.004620 containerd[1459]: time="2025-02-13T20:15:25.004353663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 471.719629ms" Feb 13 20:15:25.004620 containerd[1459]: time="2025-02-13T20:15:25.004403641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:15:25.006141 containerd[1459]: time="2025-02-13T20:15:25.005755937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:15:25.010124 containerd[1459]: time="2025-02-13T20:15:25.010067501Z" level=info msg="CreateContainer within sandbox \"c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:15:25.025085 containerd[1459]: time="2025-02-13T20:15:25.024946348Z" level=info msg="CreateContainer within sandbox \"c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5fc1794f35dffff18a22658eef91fbeb4b85893a8439b8e0fd34ccc838dd704a\"" Feb 13 20:15:25.027609 containerd[1459]: time="2025-02-13T20:15:25.027560162Z" level=info msg="StartContainer for \"5fc1794f35dffff18a22658eef91fbeb4b85893a8439b8e0fd34ccc838dd704a\"" Feb 13 20:15:25.082564 systemd[1]: Started cri-containerd-5fc1794f35dffff18a22658eef91fbeb4b85893a8439b8e0fd34ccc838dd704a.scope - libcontainer container 5fc1794f35dffff18a22658eef91fbeb4b85893a8439b8e0fd34ccc838dd704a. 
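The same apiserver image is pulled twice in this window: the first pull above took 4.49772988s, while this second one completes in 471.719629ms with only 77 bytes read, i.e. the content was evidently already present locally. The speedup, from the logged figures:

    # Both durations are from the two calico/apiserver PullImage entries above;
    # "bytes read=77" on the second pull shows the layers were already cached.
    first_pull_s = 4.49772988
    second_pull_s = 0.471719629

    print(f"cached pull: ~{first_pull_s / second_pull_s:.1f}x faster")  # ~9.5x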
Feb 13 20:15:25.147288 containerd[1459]: time="2025-02-13T20:15:25.147025703Z" level=info msg="StartContainer for \"5fc1794f35dffff18a22658eef91fbeb4b85893a8439b8e0fd34ccc838dd704a\" returns successfully" Feb 13 20:15:25.175970 kubelet[2487]: E0213 20:15:25.175026 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:26.176864 kubelet[2487]: I0213 20:15:26.176833 2487 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:15:27.276216 containerd[1459]: time="2025-02-13T20:15:27.275571351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:27.277018 containerd[1459]: time="2025-02-13T20:15:27.276932309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:15:27.280740 containerd[1459]: time="2025-02-13T20:15:27.280109197Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:27.281267 containerd[1459]: time="2025-02-13T20:15:27.280945000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.275109882s" Feb 13 20:15:27.281267 containerd[1459]: time="2025-02-13T20:15:27.280987905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:15:27.283015 containerd[1459]: time="2025-02-13T20:15:27.282923921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:27.284710 containerd[1459]: time="2025-02-13T20:15:27.284046554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:15:27.323141 containerd[1459]: time="2025-02-13T20:15:27.322959086Z" level=info msg="CreateContainer within sandbox \"e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:15:27.343225 containerd[1459]: time="2025-02-13T20:15:27.342895565Z" level=info msg="CreateContainer within sandbox \"e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2\"" Feb 13 20:15:27.344857 containerd[1459]: time="2025-02-13T20:15:27.344806677Z" level=info msg="StartContainer for \"72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2\"" Feb 13 20:15:27.415620 systemd[1]: Started cri-containerd-72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2.scope - libcontainer container 72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2. 
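The kubelet warning repeated throughout this log fires because the host's resolver configuration carries more entries than kubelet will propagate to pods, and the applied line even lists 67.207.67.3 twice. A sketch of a dedupe-then-truncate pass over that list; this illustrates the constraint and is not kubelet's actual resolver code:

    # The applied nameserver line from the kubelet warnings above.
    applied = ["67.207.67.3", "67.207.67.2", "67.207.67.3"]

    MAX_NAMESERVERS = 3  # the per-resolv.conf limit kubelet enforces
    unique = list(dict.fromkeys(applied))  # order-preserving dedupe
    print(unique[:MAX_NAMESERVERS])        # ['67.207.67.3', '67.207.67.2']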
Feb 13 20:15:27.532041 containerd[1459]: time="2025-02-13T20:15:27.531894040Z" level=info msg="StartContainer for \"72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2\" returns successfully" Feb 13 20:15:28.230395 kubelet[2487]: I0213 20:15:28.230057 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c485547b7-qsz9v" podStartSLOduration=33.105725682 podStartE2EDuration="36.230026169s" podCreationTimestamp="2025-02-13 20:14:52 +0000 UTC" firstStartedPulling="2025-02-13 20:15:21.88109503 +0000 UTC m=+41.349618407" lastFinishedPulling="2025-02-13 20:15:25.005395476 +0000 UTC m=+44.473918894" observedRunningTime="2025-02-13 20:15:25.195901626 +0000 UTC m=+44.664425042" watchObservedRunningTime="2025-02-13 20:15:28.230026169 +0000 UTC m=+47.698549552" Feb 13 20:15:28.308631 systemd[1]: run-containerd-runc-k8s.io-72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2-runc.VmBPKD.mount: Deactivated successfully. Feb 13 20:15:28.391660 kubelet[2487]: I0213 20:15:28.391564 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8665865df-9q9vt" podStartSLOduration=30.217908915 podStartE2EDuration="35.389987286s" podCreationTimestamp="2025-02-13 20:14:53 +0000 UTC" firstStartedPulling="2025-02-13 20:15:22.111383964 +0000 UTC m=+41.579907330" lastFinishedPulling="2025-02-13 20:15:27.283462322 +0000 UTC m=+46.751985701" observedRunningTime="2025-02-13 20:15:28.230785493 +0000 UTC m=+47.699308897" watchObservedRunningTime="2025-02-13 20:15:28.389987286 +0000 UTC m=+47.858510686" Feb 13 20:15:28.776672 containerd[1459]: time="2025-02-13T20:15:28.776617834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:28.777587 containerd[1459]: time="2025-02-13T20:15:28.777500242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:15:28.778133 containerd[1459]: time="2025-02-13T20:15:28.778090200Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:28.780548 containerd[1459]: time="2025-02-13T20:15:28.780509285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:28.781551 containerd[1459]: time="2025-02-13T20:15:28.781403496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.49731398s" Feb 13 20:15:28.781551 containerd[1459]: time="2025-02-13T20:15:28.781444483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:15:28.784355 containerd[1459]: time="2025-02-13T20:15:28.784216765Z" level=info msg="CreateContainer within sandbox 
\"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:15:28.800367 containerd[1459]: time="2025-02-13T20:15:28.800316937Z" level=info msg="CreateContainer within sandbox \"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2a49ea5fa1bfc9ae8431f61713b118949f84724b52e52656279db965841dbcca\"" Feb 13 20:15:28.803709 containerd[1459]: time="2025-02-13T20:15:28.801549218Z" level=info msg="StartContainer for \"2a49ea5fa1bfc9ae8431f61713b118949f84724b52e52656279db965841dbcca\"" Feb 13 20:15:28.849563 systemd[1]: Started cri-containerd-2a49ea5fa1bfc9ae8431f61713b118949f84724b52e52656279db965841dbcca.scope - libcontainer container 2a49ea5fa1bfc9ae8431f61713b118949f84724b52e52656279db965841dbcca. Feb 13 20:15:28.894390 containerd[1459]: time="2025-02-13T20:15:28.894347650Z" level=info msg="StartContainer for \"2a49ea5fa1bfc9ae8431f61713b118949f84724b52e52656279db965841dbcca\" returns successfully" Feb 13 20:15:29.209305 kubelet[2487]: I0213 20:15:29.208077 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mvlgz" podStartSLOduration=28.159977094 podStartE2EDuration="36.208057842s" podCreationTimestamp="2025-02-13 20:14:53 +0000 UTC" firstStartedPulling="2025-02-13 20:15:20.734272204 +0000 UTC m=+40.202795571" lastFinishedPulling="2025-02-13 20:15:28.782352951 +0000 UTC m=+48.250876319" observedRunningTime="2025-02-13 20:15:29.207144014 +0000 UTC m=+48.675667404" watchObservedRunningTime="2025-02-13 20:15:29.208057842 +0000 UTC m=+48.676581228" Feb 13 20:15:29.962001 kubelet[2487]: I0213 20:15:29.959698 2487 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:15:29.964844 kubelet[2487]: I0213 20:15:29.964744 2487 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:15:34.582790 systemd[1]: Started sshd@7-146.190.40.231:22-147.75.109.163:44894.service - OpenSSH per-connection server daemon (147.75.109.163:44894). Feb 13 20:15:34.708942 sshd[4823]: Accepted publickey for core from 147.75.109.163 port 44894 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:34.710546 sshd[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:34.723157 systemd-logind[1449]: New session 8 of user core. Feb 13 20:15:34.730566 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:15:35.327301 sshd[4823]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:35.335222 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:15:35.336593 systemd[1]: sshd@7-146.190.40.231:22-147.75.109.163:44894.service: Deactivated successfully. Feb 13 20:15:35.338785 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:15:35.340061 systemd-logind[1449]: Removed session 8. Feb 13 20:15:40.351744 systemd[1]: Started sshd@8-146.190.40.231:22-147.75.109.163:57264.service - OpenSSH per-connection server daemon (147.75.109.163:57264). 
Feb 13 20:15:40.411892 sshd[4843]: Accepted publickey for core from 147.75.109.163 port 57264 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:40.414280 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:40.423677 systemd-logind[1449]: New session 9 of user core. Feb 13 20:15:40.425490 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:15:40.609856 sshd[4843]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:40.615679 systemd[1]: sshd@8-146.190.40.231:22-147.75.109.163:57264.service: Deactivated successfully. Feb 13 20:15:40.619616 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:15:40.625427 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:15:40.628364 systemd-logind[1449]: Removed session 9. Feb 13 20:15:40.727197 containerd[1459]: time="2025-02-13T20:15:40.726335520Z" level=info msg="StopPodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\"" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.878 [WARNING][4870] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c75929e-d081-42af-bef0-99987551ea46", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5", Pod:"csi-node-driver-mvlgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd173aff915", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.881 [INFO][4870] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.881 [INFO][4870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" iface="eth0" netns="" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.881 [INFO][4870] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.881 [INFO][4870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.925 [INFO][4876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.925 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.925 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.934 [WARNING][4876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.934 [INFO][4876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.937 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:40.944316 containerd[1459]: 2025-02-13 20:15:40.941 [INFO][4870] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:40.944316 containerd[1459]: time="2025-02-13T20:15:40.944062009Z" level=info msg="TearDown network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" successfully" Feb 13 20:15:40.944316 containerd[1459]: time="2025-02-13T20:15:40.944091744Z" level=info msg="StopPodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" returns successfully" Feb 13 20:15:40.996299 containerd[1459]: time="2025-02-13T20:15:40.994943978Z" level=info msg="RemovePodSandbox for \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\"" Feb 13 20:15:40.998058 containerd[1459]: time="2025-02-13T20:15:40.997965203Z" level=info msg="Forcibly stopping sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\"" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.065 [WARNING][4894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c75929e-d081-42af-bef0-99987551ea46", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"3d965e9b95bba10a75ac85f68075f56ed8c5005ff3a44090374fd7607c5b43f5", Pod:"csi-node-driver-mvlgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd173aff915", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.066 [INFO][4894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.066 [INFO][4894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" iface="eth0" netns="" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.066 [INFO][4894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.066 [INFO][4894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.107 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.107 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.107 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.115 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.116 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" HandleID="k8s-pod-network.8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-csi--node--driver--mvlgz-eth0" Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.119 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.124384 containerd[1459]: 2025-02-13 20:15:41.121 [INFO][4894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6" Feb 13 20:15:41.125160 containerd[1459]: time="2025-02-13T20:15:41.124439758Z" level=info msg="TearDown network for sandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" successfully" Feb 13 20:15:41.142171 containerd[1459]: time="2025-02-13T20:15:41.141868360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:41.154857 containerd[1459]: time="2025-02-13T20:15:41.154582881Z" level=info msg="RemovePodSandbox \"8a32a9419b76bf782b909681cf944157b8aeed643f05afd93df42ccd472ce6f6\" returns successfully" Feb 13 20:15:41.178949 containerd[1459]: time="2025-02-13T20:15:41.178896771Z" level=info msg="StopPodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\"" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.234 [WARNING][4918] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"57bc2770-4aec-4375-85f6-bbb47a5304af", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52", Pod:"calico-apiserver-5c485547b7-qsz9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic15fb987785", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.234 [INFO][4918] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.234 [INFO][4918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" iface="eth0" netns="" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.235 [INFO][4918] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.235 [INFO][4918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.269 [INFO][4925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.270 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.270 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.278 [WARNING][4925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.278 [INFO][4925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.280 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.285138 containerd[1459]: 2025-02-13 20:15:41.283 [INFO][4918] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.286012 containerd[1459]: time="2025-02-13T20:15:41.285173958Z" level=info msg="TearDown network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" successfully" Feb 13 20:15:41.286012 containerd[1459]: time="2025-02-13T20:15:41.285210684Z" level=info msg="StopPodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" returns successfully" Feb 13 20:15:41.287027 containerd[1459]: time="2025-02-13T20:15:41.286981697Z" level=info msg="RemovePodSandbox for \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\"" Feb 13 20:15:41.287159 containerd[1459]: time="2025-02-13T20:15:41.287035307Z" level=info msg="Forcibly stopping sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\"" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.358 [WARNING][4943] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"57bc2770-4aec-4375-85f6-bbb47a5304af", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"c55c42cfa88768633293aea7013e4406a6afdc7bd1f478f86031f21809fb3d52", Pod:"calico-apiserver-5c485547b7-qsz9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic15fb987785", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.358 [INFO][4943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.358 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" iface="eth0" netns="" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.358 [INFO][4943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.358 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.390 [INFO][4949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.391 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.391 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.400 [WARNING][4949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.400 [INFO][4949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" HandleID="k8s-pod-network.68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--qsz9v-eth0" Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.406 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.420199 containerd[1459]: 2025-02-13 20:15:41.413 [INFO][4943] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5" Feb 13 20:15:41.420199 containerd[1459]: time="2025-02-13T20:15:41.419203269Z" level=info msg="TearDown network for sandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" successfully" Feb 13 20:15:41.422975 containerd[1459]: time="2025-02-13T20:15:41.422931506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:41.423221 containerd[1459]: time="2025-02-13T20:15:41.423045617Z" level=info msg="RemovePodSandbox \"68fd4adee6c875d4a11a9309f5d01d86cb194d75fb25339112228398b042eba5\" returns successfully" Feb 13 20:15:41.424005 containerd[1459]: time="2025-02-13T20:15:41.423886725Z" level=info msg="StopPodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\"" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.513 [WARNING][4967] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52", Pod:"coredns-668d6bf9bc-zxxg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa6eca53c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.514 [INFO][4967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.514 [INFO][4967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" iface="eth0" netns="" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.514 [INFO][4967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.514 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.543 [INFO][4973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.543 [INFO][4973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.543 [INFO][4973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.551 [WARNING][4973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.551 [INFO][4973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.553 [INFO][4973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.558278 containerd[1459]: 2025-02-13 20:15:41.555 [INFO][4967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.558278 containerd[1459]: time="2025-02-13T20:15:41.558091140Z" level=info msg="TearDown network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" successfully" Feb 13 20:15:41.558278 containerd[1459]: time="2025-02-13T20:15:41.558124345Z" level=info msg="StopPodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" returns successfully" Feb 13 20:15:41.559905 containerd[1459]: time="2025-02-13T20:15:41.559851763Z" level=info msg="RemovePodSandbox for \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\"" Feb 13 20:15:41.560002 containerd[1459]: time="2025-02-13T20:15:41.559930366Z" level=info msg="Forcibly stopping sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\"" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.624 [WARNING][4991] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d02c5ef1-b1ad-4b3b-8ff7-c248462e22fa", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"1e0019baf9a87ea74f406bec9e24da77c95b14a25c0ef8f603ccfc49cdba7b52", Pod:"coredns-668d6bf9bc-zxxg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa6eca53c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.625 [INFO][4991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.625 [INFO][4991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" iface="eth0" netns="" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.625 [INFO][4991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.625 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.666 [INFO][4997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.667 [INFO][4997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.667 [INFO][4997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.678 [WARNING][4997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.678 [INFO][4997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" HandleID="k8s-pod-network.e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--zxxg2-eth0" Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.681 [INFO][4997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.687326 containerd[1459]: 2025-02-13 20:15:41.683 [INFO][4991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a" Feb 13 20:15:41.687326 containerd[1459]: time="2025-02-13T20:15:41.686020762Z" level=info msg="TearDown network for sandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" successfully" Feb 13 20:15:41.689817 containerd[1459]: time="2025-02-13T20:15:41.689765313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:41.690292 containerd[1459]: time="2025-02-13T20:15:41.689871320Z" level=info msg="RemovePodSandbox \"e0416cb9f9086505ff78b5823b649e4eac317392c6d063ee5eb60e10b29ab89a\" returns successfully" Feb 13 20:15:41.691008 containerd[1459]: time="2025-02-13T20:15:41.690661890Z" level=info msg="StopPodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\"" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.746 [WARNING][5015] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"263edc08-b986-475d-a3d0-6d21aa1462c9", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1", Pod:"coredns-668d6bf9bc-dd2xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78f3b9dcce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.747 [INFO][5015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.747 [INFO][5015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" iface="eth0" netns="" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.747 [INFO][5015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.747 [INFO][5015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.770 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.770 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.770 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.777 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.777 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.779 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.782900 containerd[1459]: 2025-02-13 20:15:41.781 [INFO][5015] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.784962 containerd[1459]: time="2025-02-13T20:15:41.783597160Z" level=info msg="TearDown network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" successfully" Feb 13 20:15:41.784962 containerd[1459]: time="2025-02-13T20:15:41.783628598Z" level=info msg="StopPodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" returns successfully" Feb 13 20:15:41.784962 containerd[1459]: time="2025-02-13T20:15:41.784180165Z" level=info msg="RemovePodSandbox for \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\"" Feb 13 20:15:41.784962 containerd[1459]: time="2025-02-13T20:15:41.784214426Z" level=info msg="Forcibly stopping sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\"" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.828 [WARNING][5039] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"263edc08-b986-475d-a3d0-6d21aa1462c9", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"632bd776f60f9dfe40e5ced277e1601c77a9903c37c255ba4470cb49919226d1", Pod:"coredns-668d6bf9bc-dd2xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78f3b9dcce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.828 [INFO][5039] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.828 [INFO][5039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" iface="eth0" netns="" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.828 [INFO][5039] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.828 [INFO][5039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.855 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.855 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.855 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.862 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.862 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" HandleID="k8s-pod-network.de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-coredns--668d6bf9bc--dd2xl-eth0" Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.864 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.868578 containerd[1459]: 2025-02-13 20:15:41.866 [INFO][5039] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea" Feb 13 20:15:41.868578 containerd[1459]: time="2025-02-13T20:15:41.868416890Z" level=info msg="TearDown network for sandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" successfully" Feb 13 20:15:41.873169 containerd[1459]: time="2025-02-13T20:15:41.873113320Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:41.873300 containerd[1459]: time="2025-02-13T20:15:41.873218814Z" level=info msg="RemovePodSandbox \"de78dbd7bfd55ac0d55abdd0358dccfe9ec20de8ca377e8b3766ba3620f3baea\" returns successfully" Feb 13 20:15:41.874135 containerd[1459]: time="2025-02-13T20:15:41.874101165Z" level=info msg="StopPodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\"" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.922 [WARNING][5063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0", GenerateName:"calico-kube-controllers-8665865df-", Namespace:"calico-system", SelfLink:"", UID:"56118977-ed07-436f-9631-a73a6dbd0a3a", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8665865df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985", Pod:"calico-kube-controllers-8665865df-9q9vt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73c53dd1feb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.922 [INFO][5063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.922 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" iface="eth0" netns="" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.922 [INFO][5063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.922 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.947 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.947 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.947 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.954 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.954 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.957 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:41.960824 containerd[1459]: 2025-02-13 20:15:41.958 [INFO][5063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:41.961339 containerd[1459]: time="2025-02-13T20:15:41.960881408Z" level=info msg="TearDown network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" successfully" Feb 13 20:15:41.961339 containerd[1459]: time="2025-02-13T20:15:41.960907313Z" level=info msg="StopPodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" returns successfully" Feb 13 20:15:41.961643 containerd[1459]: time="2025-02-13T20:15:41.961613813Z" level=info msg="RemovePodSandbox for \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\"" Feb 13 20:15:41.961676 containerd[1459]: time="2025-02-13T20:15:41.961647323Z" level=info msg="Forcibly stopping sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\"" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.008 [WARNING][5087] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0", GenerateName:"calico-kube-controllers-8665865df-", Namespace:"calico-system", SelfLink:"", UID:"56118977-ed07-436f-9631-a73a6dbd0a3a", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8665865df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"e74239c497e42ce855d4097a367c3262a7464ec8254af8a69cbfe2298b6e7985", Pod:"calico-kube-controllers-8665865df-9q9vt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73c53dd1feb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.008 [INFO][5087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.008 [INFO][5087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" iface="eth0" netns="" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.008 [INFO][5087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.008 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.040 [INFO][5093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.041 [INFO][5093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.041 [INFO][5093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.047 [WARNING][5093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.047 [INFO][5093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" HandleID="k8s-pod-network.8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--kube--controllers--8665865df--9q9vt-eth0" Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.050 [INFO][5093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:42.054288 containerd[1459]: 2025-02-13 20:15:42.052 [INFO][5087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc" Feb 13 20:15:42.054751 containerd[1459]: time="2025-02-13T20:15:42.054306150Z" level=info msg="TearDown network for sandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" successfully" Feb 13 20:15:42.058148 containerd[1459]: time="2025-02-13T20:15:42.058044248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:42.058331 containerd[1459]: time="2025-02-13T20:15:42.058173839Z" level=info msg="RemovePodSandbox \"8dbccc682a418a38504ea585f9b55311110eb890e0bf8ce0886091a5fcf9a7cc\" returns successfully" Feb 13 20:15:42.058897 containerd[1459]: time="2025-02-13T20:15:42.058724710Z" level=info msg="StopPodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\"" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.106 [WARNING][5111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7", Pod:"calico-apiserver-5c485547b7-b8cmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7305ff68990", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.106 [INFO][5111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.106 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" iface="eth0" netns="" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.106 [INFO][5111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.106 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.131 [INFO][5117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.132 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.132 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.141 [WARNING][5117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.141 [INFO][5117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.144 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:42.148852 containerd[1459]: 2025-02-13 20:15:42.146 [INFO][5111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.149821 containerd[1459]: time="2025-02-13T20:15:42.149172825Z" level=info msg="TearDown network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" successfully" Feb 13 20:15:42.149821 containerd[1459]: time="2025-02-13T20:15:42.149207218Z" level=info msg="StopPodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" returns successfully" Feb 13 20:15:42.151143 containerd[1459]: time="2025-02-13T20:15:42.151106120Z" level=info msg="RemovePodSandbox for \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\"" Feb 13 20:15:42.151143 containerd[1459]: time="2025-02-13T20:15:42.151143469Z" level=info msg="Forcibly stopping sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\"" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.199 [WARNING][5135] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0", GenerateName:"calico-apiserver-5c485547b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea4ae1c6-3de5-48e2-9f2a-afbb0ca69570", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c485547b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-9-4d1da4e47c", ContainerID:"10126f003e554c51593189ed732ef8af6a71f8d0116b48c53bec4a54db04a5a7", Pod:"calico-apiserver-5c485547b7-b8cmn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7305ff68990", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.199 [INFO][5135] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.199 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" iface="eth0" netns="" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.199 [INFO][5135] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.199 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.223 [INFO][5141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.223 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.223 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.234 [WARNING][5141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.234 [INFO][5141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" HandleID="k8s-pod-network.c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Workload="ci--4081.3.1--9--4d1da4e47c-k8s-calico--apiserver--5c485547b7--b8cmn-eth0" Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.237 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:15:42.243626 containerd[1459]: 2025-02-13 20:15:42.241 [INFO][5135] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc" Feb 13 20:15:42.244694 containerd[1459]: time="2025-02-13T20:15:42.243734678Z" level=info msg="TearDown network for sandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" successfully" Feb 13 20:15:42.247692 containerd[1459]: time="2025-02-13T20:15:42.247502771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:15:42.247692 containerd[1459]: time="2025-02-13T20:15:42.247569580Z" level=info msg="RemovePodSandbox \"c8ddf6f68e64c0513c670b3b0df134bc2a27a7152a88c6906a309f552e5ff2bc\" returns successfully" Feb 13 20:15:43.086497 kubelet[2487]: E0213 20:15:43.086013 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:43.114000 kubelet[2487]: I0213 20:15:43.113295 2487 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:15:43.263417 kubelet[2487]: E0213 20:15:43.263386 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:45.631625 systemd[1]: Started sshd@9-146.190.40.231:22-147.75.109.163:57268.service - OpenSSH per-connection server daemon (147.75.109.163:57268). Feb 13 20:15:45.739936 sshd[5200]: Accepted publickey for core from 147.75.109.163 port 57268 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:45.743771 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:45.750039 systemd-logind[1449]: New session 10 of user core. Feb 13 20:15:45.753459 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:15:45.923303 sshd[5200]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:45.929067 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:15:45.929834 systemd[1]: sshd@9-146.190.40.231:22-147.75.109.163:57268.service: Deactivated successfully. Feb 13 20:15:45.932087 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:15:45.934133 systemd-logind[1449]: Removed session 10. 
Feb 13 20:15:50.703417 kubelet[2487]: E0213 20:15:50.702890 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:50.943768 systemd[1]: Started sshd@10-146.190.40.231:22-147.75.109.163:52924.service - OpenSSH per-connection server daemon (147.75.109.163:52924). Feb 13 20:15:51.007112 sshd[5216]: Accepted publickey for core from 147.75.109.163 port 52924 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:51.009947 sshd[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:51.016744 systemd-logind[1449]: New session 11 of user core. Feb 13 20:15:51.020578 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:15:51.223789 sshd[5216]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:51.237194 systemd[1]: sshd@10-146.190.40.231:22-147.75.109.163:52924.service: Deactivated successfully. Feb 13 20:15:51.241360 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:15:51.243565 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:15:51.249018 systemd[1]: Started sshd@11-146.190.40.231:22-147.75.109.163:52936.service - OpenSSH per-connection server daemon (147.75.109.163:52936). Feb 13 20:15:51.251888 systemd-logind[1449]: Removed session 11. Feb 13 20:15:51.316320 sshd[5230]: Accepted publickey for core from 147.75.109.163 port 52936 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:51.318167 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:51.325405 systemd-logind[1449]: New session 12 of user core. Feb 13 20:15:51.333671 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:15:51.569102 sshd[5230]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:51.580715 systemd[1]: sshd@11-146.190.40.231:22-147.75.109.163:52936.service: Deactivated successfully. Feb 13 20:15:51.588745 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:15:51.595322 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:15:51.605063 systemd[1]: Started sshd@12-146.190.40.231:22-147.75.109.163:52940.service - OpenSSH per-connection server daemon (147.75.109.163:52940). Feb 13 20:15:51.610304 systemd-logind[1449]: Removed session 12. Feb 13 20:15:51.684044 sshd[5240]: Accepted publickey for core from 147.75.109.163 port 52940 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:51.686262 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:51.692630 systemd-logind[1449]: New session 13 of user core. Feb 13 20:15:51.698485 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:15:51.875971 sshd[5240]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:51.882003 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:15:51.883040 systemd[1]: sshd@12-146.190.40.231:22-147.75.109.163:52940.service: Deactivated successfully. Feb 13 20:15:51.887016 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:15:51.891411 systemd-logind[1449]: Removed session 13. Feb 13 20:15:56.899626 systemd[1]: Started sshd@13-146.190.40.231:22-147.75.109.163:52946.service - OpenSSH per-connection server daemon (147.75.109.163:52946). 
Feb 13 20:15:56.956885 sshd[5263]: Accepted publickey for core from 147.75.109.163 port 52946 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:56.959135 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:56.964388 systemd-logind[1449]: New session 14 of user core. Feb 13 20:15:56.970520 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:15:57.132566 sshd[5263]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:57.138627 systemd[1]: sshd@13-146.190.40.231:22-147.75.109.163:52946.service: Deactivated successfully. Feb 13 20:15:57.141648 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:15:57.143640 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:15:57.145088 systemd-logind[1449]: Removed session 14. Feb 13 20:15:58.212406 systemd[1]: run-containerd-runc-k8s.io-72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2-runc.fJo3UM.mount: Deactivated successfully. Feb 13 20:15:58.704282 kubelet[2487]: E0213 20:15:58.703913 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:02.177934 systemd[1]: Started sshd@14-146.190.40.231:22-147.75.109.163:58850.service - OpenSSH per-connection server daemon (147.75.109.163:58850). Feb 13 20:16:02.311753 sshd[5296]: Accepted publickey for core from 147.75.109.163 port 58850 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:02.314802 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:02.326812 systemd-logind[1449]: New session 15 of user core. Feb 13 20:16:02.336831 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:16:02.640188 sshd[5296]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:02.652947 systemd[1]: sshd@14-146.190.40.231:22-147.75.109.163:58850.service: Deactivated successfully. Feb 13 20:16:02.656732 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:16:02.658494 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:16:02.660634 systemd-logind[1449]: Removed session 15. Feb 13 20:16:07.668562 systemd[1]: Started sshd@15-146.190.40.231:22-147.75.109.163:58862.service - OpenSSH per-connection server daemon (147.75.109.163:58862). Feb 13 20:16:07.702497 kubelet[2487]: E0213 20:16:07.702015 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:07.721642 sshd[5311]: Accepted publickey for core from 147.75.109.163 port 58862 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:07.723656 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:07.729159 systemd-logind[1449]: New session 16 of user core. Feb 13 20:16:07.735614 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:16:07.898106 sshd[5311]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:07.904082 systemd[1]: sshd@15-146.190.40.231:22-147.75.109.163:58862.service: Deactivated successfully. Feb 13 20:16:07.907885 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:16:07.909487 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. 
Feb 13 20:16:07.910986 systemd-logind[1449]: Removed session 16. Feb 13 20:16:08.160858 systemd[1]: run-containerd-runc-k8s.io-72c53cff0f4b90d1f7b9645864c5e564438708e2ad615604540eba60a0d6b2c2-runc.V3pZ6P.mount: Deactivated successfully. Feb 13 20:16:12.917618 systemd[1]: Started sshd@16-146.190.40.231:22-147.75.109.163:36962.service - OpenSSH per-connection server daemon (147.75.109.163:36962). Feb 13 20:16:12.973330 sshd[5344]: Accepted publickey for core from 147.75.109.163 port 36962 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:12.975823 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:12.982424 systemd-logind[1449]: New session 17 of user core. Feb 13 20:16:12.986587 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:16:13.142392 sshd[5344]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:13.148601 systemd[1]: sshd@16-146.190.40.231:22-147.75.109.163:36962.service: Deactivated successfully. Feb 13 20:16:13.151909 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:16:13.153722 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:16:13.155729 systemd-logind[1449]: Removed session 17. Feb 13 20:16:18.163663 systemd[1]: Started sshd@17-146.190.40.231:22-147.75.109.163:36972.service - OpenSSH per-connection server daemon (147.75.109.163:36972). Feb 13 20:16:18.217277 sshd[5382]: Accepted publickey for core from 147.75.109.163 port 36972 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:18.220404 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:18.226248 systemd-logind[1449]: New session 18 of user core. Feb 13 20:16:18.228454 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:16:18.378588 sshd[5382]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:18.390092 systemd[1]: sshd@17-146.190.40.231:22-147.75.109.163:36972.service: Deactivated successfully. Feb 13 20:16:18.393211 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:16:18.396336 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:16:18.407698 systemd[1]: Started sshd@18-146.190.40.231:22-147.75.109.163:36976.service - OpenSSH per-connection server daemon (147.75.109.163:36976). Feb 13 20:16:18.410484 systemd-logind[1449]: Removed session 18. Feb 13 20:16:18.457458 sshd[5394]: Accepted publickey for core from 147.75.109.163 port 36976 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:18.460005 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:18.467386 systemd-logind[1449]: New session 19 of user core. Feb 13 20:16:18.473530 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:16:18.890025 sshd[5394]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:18.893963 systemd[1]: Started sshd@19-146.190.40.231:22-147.75.109.163:36988.service - OpenSSH per-connection server daemon (147.75.109.163:36988). Feb 13 20:16:18.898046 systemd[1]: sshd@18-146.190.40.231:22-147.75.109.163:36976.service: Deactivated successfully. Feb 13 20:16:18.902094 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:16:18.903946 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:16:18.907970 systemd-logind[1449]: Removed session 19. 
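The SSH entries throughout this stretch show the socket-activated, per-connection sshd pattern that Flatcar carries over from Container Linux: a listening socket unit with Accept=yes spawns one templated sshd@.service instance per TCP connection (hence unit names like sshd@13-146.190.40.231:22-147.75.109.163:52946.service), and pam_unix plus systemd-logind then wrap each login in its own session-N.scope. An illustrative sketch of such units, assuming the conventional layout rather than Flatcar's exact shipped files:

    # sshd.socket -- accept each connection and spawn a per-connection unit
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service -- template instance; sshd runs in inetd mode (-i)
    # against the accepted connection passed in on stdin.
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

The "Deactivated successfully" lines for each sshd@N-….service are simply these per-connection instances exiting once their session closes.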
Feb 13 20:16:18.972479 sshd[5403]: Accepted publickey for core from 147.75.109.163 port 36988 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:18.975800 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:18.983628 systemd-logind[1449]: New session 20 of user core. Feb 13 20:16:18.987956 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:16:20.116507 sshd[5403]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:20.141412 systemd[1]: Started sshd@20-146.190.40.231:22-147.75.109.163:36382.service - OpenSSH per-connection server daemon (147.75.109.163:36382). Feb 13 20:16:20.143832 systemd[1]: sshd@19-146.190.40.231:22-147.75.109.163:36988.service: Deactivated successfully. Feb 13 20:16:20.153951 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:16:20.161480 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:16:20.167688 systemd-logind[1449]: Removed session 20. Feb 13 20:16:20.222216 sshd[5421]: Accepted publickey for core from 147.75.109.163 port 36382 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:20.224435 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:20.232488 systemd-logind[1449]: New session 21 of user core. Feb 13 20:16:20.237554 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:16:20.684216 sshd[5421]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:20.694872 systemd[1]: sshd@20-146.190.40.231:22-147.75.109.163:36382.service: Deactivated successfully. Feb 13 20:16:20.701492 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:16:20.706761 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:16:20.708856 kubelet[2487]: E0213 20:16:20.708812 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:20.716246 systemd[1]: Started sshd@21-146.190.40.231:22-147.75.109.163:36390.service - OpenSSH per-connection server daemon (147.75.109.163:36390). Feb 13 20:16:20.720903 systemd-logind[1449]: Removed session 21. Feb 13 20:16:20.783439 sshd[5433]: Accepted publickey for core from 147.75.109.163 port 36390 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:20.784314 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:20.790970 systemd-logind[1449]: New session 22 of user core. Feb 13 20:16:20.798129 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:16:20.969121 sshd[5433]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:20.973996 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:16:20.977824 systemd[1]: sshd@21-146.190.40.231:22-147.75.109.163:36390.service: Deactivated successfully. Feb 13 20:16:20.981108 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:16:20.983385 systemd-logind[1449]: Removed session 22. Feb 13 20:16:25.989877 systemd[1]: Started sshd@22-146.190.40.231:22-147.75.109.163:36402.service - OpenSSH per-connection server daemon (147.75.109.163:36402). 
Feb 13 20:16:26.035911 sshd[5445]: Accepted publickey for core from 147.75.109.163 port 36402 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:26.038838 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:26.045966 systemd-logind[1449]: New session 23 of user core. Feb 13 20:16:26.055599 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:16:26.199389 sshd[5445]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:26.203936 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:16:26.206591 systemd[1]: sshd@22-146.190.40.231:22-147.75.109.163:36402.service: Deactivated successfully. Feb 13 20:16:26.209980 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:16:26.212135 systemd-logind[1449]: Removed session 23. Feb 13 20:16:31.221741 systemd[1]: Started sshd@23-146.190.40.231:22-147.75.109.163:41046.service - OpenSSH per-connection server daemon (147.75.109.163:41046). Feb 13 20:16:31.324544 sshd[5478]: Accepted publickey for core from 147.75.109.163 port 41046 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:31.327606 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:31.339454 systemd-logind[1449]: New session 24 of user core. Feb 13 20:16:31.345830 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:16:31.582169 sshd[5478]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:31.589454 systemd[1]: sshd@23-146.190.40.231:22-147.75.109.163:41046.service: Deactivated successfully. Feb 13 20:16:31.592988 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:16:31.594395 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:16:31.595796 systemd-logind[1449]: Removed session 24. Feb 13 20:16:31.702770 kubelet[2487]: E0213 20:16:31.702699 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:36.607585 systemd[1]: Started sshd@24-146.190.40.231:22-147.75.109.163:41060.service - OpenSSH per-connection server daemon (147.75.109.163:41060). Feb 13 20:16:36.681271 sshd[5496]: Accepted publickey for core from 147.75.109.163 port 41060 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:36.683144 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:36.688308 systemd-logind[1449]: New session 25 of user core. Feb 13 20:16:36.696485 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:16:36.921044 sshd[5496]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:36.926359 systemd[1]: sshd@24-146.190.40.231:22-147.75.109.163:41060.service: Deactivated successfully. Feb 13 20:16:36.929966 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:16:36.931479 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:16:36.935168 systemd-logind[1449]: Removed session 25. Feb 13 20:16:41.943594 systemd[1]: Started sshd@25-146.190.40.231:22-147.75.109.163:56164.service - OpenSSH per-connection server daemon (147.75.109.163:56164). 
Feb 13 20:16:42.001735 sshd[5510]: Accepted publickey for core from 147.75.109.163 port 56164 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:42.005418 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:42.011800 systemd-logind[1449]: New session 26 of user core. Feb 13 20:16:42.017504 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:16:42.171150 sshd[5510]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:42.175620 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:16:42.176435 systemd[1]: sshd@25-146.190.40.231:22-147.75.109.163:56164.service: Deactivated successfully. Feb 13 20:16:42.180320 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:16:42.182047 systemd-logind[1449]: Removed session 26. Feb 13 20:16:43.257565 systemd[1]: run-containerd-runc-k8s.io-810b1f770fde300dc0387f1c7052bd9604986fb3d505efb256e426a82199e197-runc.u6vYh0.mount: Deactivated successfully.