Feb 13 20:18:50.958949 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:18:50.958985 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:18:50.959000 kernel: BIOS-provided physical RAM map:
Feb 13 20:18:50.959007 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:18:50.959014 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:18:50.959020 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:18:50.959028 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Feb 13 20:18:50.959035 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Feb 13 20:18:50.959042 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:18:50.959052 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:18:50.959059 kernel: NX (Execute Disable) protection: active
Feb 13 20:18:50.959066 kernel: APIC: Static calls initialized
Feb 13 20:18:50.959073 kernel: SMBIOS 2.8 present.
Feb 13 20:18:50.959081 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 20:18:50.959089 kernel: Hypervisor detected: KVM
Feb 13 20:18:50.959103 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:18:50.959115 kernel: kvm-clock: using sched offset of 3054774705 cycles
Feb 13 20:18:50.959124 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:18:50.959136 kernel: tsc: Detected 2494.138 MHz processor
Feb 13 20:18:50.959145 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:18:50.959154 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:18:50.959161 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Feb 13 20:18:50.959170 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:18:50.959178 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:18:50.959189 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:18:50.959197 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Feb 13 20:18:50.959205 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959213 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959222 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959230 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 20:18:50.959237 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959245 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959253 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959264 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:18:50.959272 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 20:18:50.959280 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 20:18:50.959287 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 20:18:50.959295 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 20:18:50.959303 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 20:18:50.959311 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 20:18:50.959325 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 20:18:50.959334 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:18:50.959342 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:18:50.959351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:18:50.959359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:18:50.959368 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Feb 13 20:18:50.959376 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Feb 13 20:18:50.959388 kernel: Zone ranges:
Feb 13 20:18:50.959396 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:18:50.959404 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Feb 13 20:18:50.959412 kernel: Normal empty
Feb 13 20:18:50.959421 kernel: Movable zone start for each node
Feb 13 20:18:50.959429 kernel: Early memory node ranges
Feb 13 20:18:50.959437 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:18:50.959445 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Feb 13 20:18:50.959453 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Feb 13 20:18:50.959465 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:18:50.959473 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:18:50.959482 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Feb 13 20:18:50.959490 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:18:50.959499 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:18:50.959507 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:18:50.959516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:18:50.959524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:18:50.959533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:18:50.959544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:18:50.959552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:18:50.959561 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:18:50.959569 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:18:50.959577 kernel: TSC deadline timer available
Feb 13 20:18:50.959586 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:18:50.959594 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:18:50.959602 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 20:18:50.959611 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:18:50.959622 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:18:50.959631 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:18:50.959639 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:18:50.959647 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:18:50.959655 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:18:50.959664 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 20:18:50.959673 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:18:50.959682 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:18:50.959694 kernel: random: crng init done
Feb 13 20:18:50.959702 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:18:50.959711 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:18:50.959719 kernel: Fallback order for Node 0: 0
Feb 13 20:18:50.959728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Feb 13 20:18:50.959736 kernel: Policy zone: DMA32
Feb 13 20:18:50.959744 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:18:50.959753 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved)
Feb 13 20:18:50.959761 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:18:50.959772 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:18:50.959781 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:18:50.959789 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:18:50.959798 kernel: Dynamic Preempt: voluntary
Feb 13 20:18:50.959806 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:18:50.959838 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:18:50.959847 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:18:50.959856 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:18:50.959864 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:18:50.959876 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:18:50.959884 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:18:50.959893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:18:50.959901 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:18:50.959910 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
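The BIOS-e820 map near the top is the firmware's description of physical memory; its two "usable" ranges add up to the kernel's own "Memory: ... 2096600K" total a few lines above. A minimal sketch that totals them, assuming this console output has been saved to a file (boot.log is a hypothetical name):

    import re

    # Matches kernel e820 lines such as:
    #   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    usable = 0
    with open("boot.log") as f:  # hypothetical path to a saved copy of this log
        for line in f:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                usable += end - start + 1  # e820 ranges are inclusive

    # For the map above this prints ~2047 MiB, consistent with the
    # kernel's reported 2096600K on this 2 GiB droplet.
    print(f"usable: {usable} bytes (~{usable / 2**20:.1f} MiB)")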
Feb 13 20:18:50.959918 kernel: Console: colour VGA+ 80x25
Feb 13 20:18:50.959927 kernel: printk: console [tty0] enabled
Feb 13 20:18:50.959935 kernel: printk: console [ttyS0] enabled
Feb 13 20:18:50.959943 kernel: ACPI: Core revision 20230628
Feb 13 20:18:50.959952 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 20:18:50.959963 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:18:50.959972 kernel: x2apic enabled
Feb 13 20:18:50.959980 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:18:50.959988 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 20:18:50.959997 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 13 20:18:50.960005 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Feb 13 20:18:50.960014 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:18:50.960022 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:18:50.960043 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:18:50.960052 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:18:50.960060 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:18:50.960072 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:18:50.960081 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:18:50.960090 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:18:50.960099 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:18:50.960107 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:18:50.960116 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:18:50.960129 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:18:50.960138 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:18:50.960147 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:18:50.960156 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:18:50.960165 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:18:50.960174 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:18:50.960183 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:18:50.960192 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:18:50.960203 kernel: landlock: Up and running.
Feb 13 20:18:50.960212 kernel: SELinux: Initializing.
Feb 13 20:18:50.960221 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:18:50.960230 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:18:50.960239 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 20:18:50.960248 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:18:50.960257 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:18:50.960266 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:18:50.960278 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
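The mitigation lines above (Spectre V1/V2, MDS, "MMIO Stale Data: Vulnerable") have runtime counterparts under sysfs, so the same statuses can be read back after boot. A short sketch; Linux-only, and it assumes a kernel recent enough to expose this directory:

    from pathlib import Path

    # Each file here holds one status string, e.g. "Mitigation: Retpolines"
    # or "Vulnerable: Clear CPU buffers attempted, no microcode".
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")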
Feb 13 20:18:50.960286 kernel: signal: max sigframe size: 1776
Feb 13 20:18:50.960296 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:18:50.960305 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:18:50.960313 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:18:50.960322 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:18:50.960331 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:18:50.960340 kernel: .... node #0, CPUs: #1
Feb 13 20:18:50.960349 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:18:50.960361 kernel: smpboot: Max logical packages: 1
Feb 13 20:18:50.960369 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Feb 13 20:18:50.960378 kernel: devtmpfs: initialized
Feb 13 20:18:50.960387 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:18:50.960397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:18:50.960406 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:18:50.960414 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:18:50.960434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:18:50.960443 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:18:50.960452 kernel: audit: type=2000 audit(1739477929.987:1): state=initialized audit_enabled=0 res=1
Feb 13 20:18:50.960465 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:18:50.960473 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:18:50.960482 kernel: cpuidle: using governor menu
Feb 13 20:18:50.960505 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:18:50.960518 kernel: dca service started, version 1.12.1
Feb 13 20:18:50.960531 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:18:50.960543 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
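The audit record above carries a raw epoch timestamp, audit(1739477929.987:1), which can be cross-checked against the rtc_cmos entry later in this log that sets the system clock to 2025-02-13T20:18:50 UTC (1739477930). Converting it is a one-liner:

    from datetime import datetime, timezone

    # 1739477929.987 is the epoch from the audit line above; rtc_cmos later
    # sets the clock to 1739477930 (2025-02-13T20:18:50 UTC), one second on.
    print(datetime.fromtimestamp(1739477929.987, tz=timezone.utc).isoformat())
    # -> 2025-02-13T20:18:49.987000+00:00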
Feb 13 20:18:50.960556 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:18:50.960576 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:18:50.960590 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:18:50.960599 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:18:50.960608 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:18:50.960617 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:18:50.960626 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:18:50.960635 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:18:50.960644 kernel: ACPI: Interpreter enabled
Feb 13 20:18:50.960652 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:18:50.960661 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:18:50.960673 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:18:50.960682 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:18:50.960691 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 20:18:50.960700 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:18:50.961994 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:18:50.962178 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:18:50.962355 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:18:50.962380 kernel: acpiphp: Slot [3] registered
Feb 13 20:18:50.962389 kernel: acpiphp: Slot [4] registered
Feb 13 20:18:50.962399 kernel: acpiphp: Slot [5] registered
Feb 13 20:18:50.962408 kernel: acpiphp: Slot [6] registered
Feb 13 20:18:50.962417 kernel: acpiphp: Slot [7] registered
Feb 13 20:18:50.962426 kernel: acpiphp: Slot [8] registered
Feb 13 20:18:50.962435 kernel: acpiphp: Slot [9] registered
Feb 13 20:18:50.962444 kernel: acpiphp: Slot [10] registered
Feb 13 20:18:50.962453 kernel: acpiphp: Slot [11] registered
Feb 13 20:18:50.962474 kernel: acpiphp: Slot [12] registered
Feb 13 20:18:50.962484 kernel: acpiphp: Slot [13] registered
Feb 13 20:18:50.962493 kernel: acpiphp: Slot [14] registered
Feb 13 20:18:50.962502 kernel: acpiphp: Slot [15] registered
Feb 13 20:18:50.962511 kernel: acpiphp: Slot [16] registered
Feb 13 20:18:50.962519 kernel: acpiphp: Slot [17] registered
Feb 13 20:18:50.962528 kernel: acpiphp: Slot [18] registered
Feb 13 20:18:50.962537 kernel: acpiphp: Slot [19] registered
Feb 13 20:18:50.962546 kernel: acpiphp: Slot [20] registered
Feb 13 20:18:50.962555 kernel: acpiphp: Slot [21] registered
Feb 13 20:18:50.962568 kernel: acpiphp: Slot [22] registered
Feb 13 20:18:50.962577 kernel: acpiphp: Slot [23] registered
Feb 13 20:18:50.962586 kernel: acpiphp: Slot [24] registered
Feb 13 20:18:50.962595 kernel: acpiphp: Slot [25] registered
Feb 13 20:18:50.962604 kernel: acpiphp: Slot [26] registered
Feb 13 20:18:50.962612 kernel: acpiphp: Slot [27] registered
Feb 13 20:18:50.962624 kernel: acpiphp: Slot [28] registered
Feb 13 20:18:50.962639 kernel: acpiphp: Slot [29] registered
Feb 13 20:18:50.962652 kernel: acpiphp: Slot [30] registered
Feb 13 20:18:50.962670 kernel: acpiphp: Slot [31] registered
Feb 13 20:18:50.962684 kernel: PCI host bridge to bus 0000:00
Feb 13 20:18:50.964843 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:18:50.965038 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:18:50.965154 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:18:50.965307 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:18:50.965446 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 20:18:50.965596 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:18:50.965768 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:18:50.965958 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:18:50.966162 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 20:18:50.966314 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 20:18:50.966462 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 20:18:50.966661 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 20:18:50.966764 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 20:18:50.968970 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 20:18:50.969135 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 20:18:50.969239 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 20:18:50.969375 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:18:50.969526 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 20:18:50.969665 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 20:18:50.969951 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 20:18:50.970100 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 20:18:50.970242 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 20:18:50.970389 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 20:18:50.970532 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:18:50.970681 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:18:50.972960 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:18:50.973183 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 20:18:50.973350 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 20:18:50.973530 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 20:18:50.973711 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:18:50.973929 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 20:18:50.974064 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 20:18:50.974221 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 20:18:50.974423 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 20:18:50.974585 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 20:18:50.974749 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 20:18:50.974947 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 20:18:50.975132 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:18:50.975268 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:18:50.975367 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 20:18:50.975465 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 20:18:50.975661 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:18:50.977922 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 20:18:50.978238 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 20:18:50.978425 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 20:18:50.978607 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 20:18:50.978801 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 20:18:50.979020 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 20:18:50.979045 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:18:50.979063 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:18:50.979080 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:18:50.979099 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:18:50.979128 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:18:50.979146 kernel: iommu: Default domain type: Translated
Feb 13 20:18:50.979163 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:18:50.979177 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:18:50.979191 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:18:50.979205 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:18:50.979218 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Feb 13 20:18:50.979390 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 20:18:50.979554 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 20:18:50.979729 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:18:50.979754 kernel: vgaarb: loaded
Feb 13 20:18:50.979770 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 20:18:50.979784 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 20:18:50.979799 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:18:50.981874 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:18:50.981923 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:18:50.981942 kernel: pnp: PnP ACPI init
Feb 13 20:18:50.981959 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 20:18:50.981992 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:18:50.982009 kernel: NET: Registered PF_INET protocol family
Feb 13 20:18:50.982025 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:18:50.982042 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:18:50.982059 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:18:50.982076 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:18:50.982093 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:18:50.982110 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:18:50.982127 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:18:50.982148 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:18:50.982163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:18:50.982178 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:18:50.982431 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:18:50.982576 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:18:50.982735 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:18:50.983959 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:18:50.984107 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 20:18:50.984249 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 20:18:50.984376 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:18:50.984392 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:18:50.984514 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37334 usecs
Feb 13 20:18:50.984533 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:18:50.984547 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:18:50.984561 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 13 20:18:50.984575 kernel: Initialise system trusted keyrings
Feb 13 20:18:50.984599 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:18:50.984613 kernel: Key type asymmetric registered
Feb 13 20:18:50.984626 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:18:50.984636 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:18:50.984645 kernel: io scheduler mq-deadline registered
Feb 13 20:18:50.984655 kernel: io scheduler kyber registered
Feb 13 20:18:50.984669 kernel: io scheduler bfq registered
Feb 13 20:18:50.984681 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:18:50.984691 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 20:18:50.984705 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:18:50.984714 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:18:50.984724 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:18:50.984738 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:18:50.984752 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:18:50.984761 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:18:50.984770 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:18:50.984780 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:18:50.986035 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:18:50.986159 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:18:50.986252 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:18:50 UTC (1739477930)
Feb 13 20:18:50.986342 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:18:50.986367 kernel: intel_pstate: CPU model not supported
Feb 13 20:18:50.986378 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:18:50.986388 kernel: Segment Routing with IPv6
Feb 13 20:18:50.986397 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:18:50.986423 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:18:50.986436 kernel: Key type dns_resolver registered
Feb 13 20:18:50.986446 kernel: IPI shorthand broadcast: enabled
Feb 13 20:18:50.986456 kernel: sched_clock: Marking stable (1017006683, 103621934)->(1152199511, -31570894)
Feb 13 20:18:50.986466 kernel: registered taskstats version 1
Feb 13 20:18:50.986475 kernel: Loading compiled-in X.509 certificates
Feb 13 20:18:50.986485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:18:50.986494 kernel: Key type .fscrypt registered
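Every 1af4:xxxx function enumerated above is a virtio device, and the device ID determines which driver binds once modules load (virtio_blk for vda/vdb, virtio_scsi for host0, and so on, as seen later in this log). A small lookup sketch for the IDs present here; the names come from the conventional virtio PCI ID assignments, not from anything the log itself states:

    # virtio PCI device IDs (vendor 0x1af4) seen in the enumeration above.
    VIRTIO_IDS = {
        0x1000: "virtio-net",      # 00:03.0 and 00:04.0 (two NICs)
        0x1001: "virtio-blk",      # 00:06.0 and 00:07.0 (vda, vdb)
        0x1002: "virtio-balloon",  # 00:08.0
        0x1004: "virtio-scsi",     # 00:05.0
        0x1050: "virtio-gpu",      # 00:02.0, the boot VGA device
    }

    for dev_id, name in sorted(VIRTIO_IDS.items()):
        print(f"1af4:{dev_id:04x} -> {name}")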
Feb 13 20:18:50.986503 kernel: Key type fscrypt-provisioning registered
Feb 13 20:18:50.986513 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:18:50.986525 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:18:50.986535 kernel: ima: No architecture policies found
Feb 13 20:18:50.986544 kernel: clk: Disabling unused clocks
Feb 13 20:18:50.986554 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:18:50.986563 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:18:50.986594 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:18:50.986606 kernel: Run /init as init process
Feb 13 20:18:50.986617 kernel: with arguments:
Feb 13 20:18:50.986627 kernel: /init
Feb 13 20:18:50.986639 kernel: with environment:
Feb 13 20:18:50.986649 kernel: HOME=/
Feb 13 20:18:50.986659 kernel: TERM=linux
Feb 13 20:18:50.986669 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:18:50.986681 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:18:50.986694 systemd[1]: Detected virtualization kvm.
Feb 13 20:18:50.986704 systemd[1]: Detected architecture x86-64.
Feb 13 20:18:50.986717 systemd[1]: Running in initrd.
Feb 13 20:18:50.986727 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:18:50.986737 systemd[1]: Hostname set to <localhost>.
Feb 13 20:18:50.986748 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:18:50.986759 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:18:50.986769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:18:50.986779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:18:50.986791 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:18:50.986805 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:18:50.986815 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:18:50.986826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:18:50.986837 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:18:50.986876 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:18:50.986886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:18:50.986897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:18:50.986911 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:18:50.986922 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:18:50.986932 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:18:50.986946 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:18:50.986956 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:18:50.986967 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:18:50.986980 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:18:50.986991 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:18:50.987001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:18:50.987012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:18:50.987023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:18:50.987033 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:18:50.987043 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:18:50.987054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:18:50.987067 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:18:50.987078 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:18:50.987088 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:18:50.987099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:18:50.987110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:18:50.987123 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:18:50.987133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:18:50.987144 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:18:50.987196 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 20:18:50.987226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:18:50.987239 systemd-journald[184]: Journal started
Feb 13 20:18:50.987265 systemd-journald[184]: Runtime Journal (/run/log/journal/b7283ebdaa7845e98f49a21947434e0c) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:18:50.962521 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 20:18:51.016937 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:18:51.016978 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:18:51.016994 kernel: Bridge firewalling registered
Feb 13 20:18:51.002900 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 20:18:51.024730 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:18:51.030967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:18:51.033561 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:18:51.049247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:18:51.051515 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:18:51.063163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:18:51.067126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:18:51.088224 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:18:51.089530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:18:51.090580 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:18:51.104123 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:18:51.105024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:18:51.126374 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:18:51.140788 dracut-cmdline[216]: dracut-dracut-053
Feb 13 20:18:51.146585 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:18:51.173276 systemd-resolved[219]: Positive Trust Anchors:
Feb 13 20:18:51.174100 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:18:51.174142 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:18:51.179652 systemd-resolved[219]: Defaulting to hostname 'linux'.
Feb 13 20:18:51.182143 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:18:51.182771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:18:51.265871 kernel: SCSI subsystem initialized
Feb 13 20:18:51.277864 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:18:51.289870 kernel: iscsi: registered transport (tcp)
Feb 13 20:18:51.316906 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:18:51.317056 kernel: QLogic iSCSI HBA Driver
Feb 13 20:18:51.388174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:18:51.398190 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:18:51.428366 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:18:51.429986 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:18:51.430013 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:18:51.478887 kernel: raid6: avx2x4 gen() 16394 MB/s
Feb 13 20:18:51.495864 kernel: raid6: avx2x2 gen() 14267 MB/s
Feb 13 20:18:51.513332 kernel: raid6: avx2x1 gen() 12754 MB/s
Feb 13 20:18:51.513406 kernel: raid6: using algorithm avx2x4 gen() 16394 MB/s
Feb 13 20:18:51.531187 kernel: raid6: .... xor() 6682 MB/s, rmw enabled
Feb 13 20:18:51.531273 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:18:51.553865 kernel: xor: automatically using best checksumming function avx
Feb 13 20:18:51.736863 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:18:51.750217 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:18:51.758081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
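dracut echoes the kernel command line it acts on, and the parameters are plain space-separated key=value tokens; repeats (rootflags=rw appears twice, console= twice) are legal, and consumers typically either keep all values or take the last one. A minimal parser sketch over an abbreviated copy of the line above:

    cmdline = (
        "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
        "flatcar.first_boot=detected flatcar.oem.id=digitalocean"
    )

    params = {}
    for token in cmdline.split():
        # partition() splits only on the first '=', so root=LABEL=ROOT
        # yields key 'root' and value 'LABEL=ROOT'; bare flags become True.
        key, sep, value = token.partition("=")
        params.setdefault(key, []).append(value if sep else True)

    print(params["root"])     # ['LABEL=ROOT']
    print(params["console"])  # ['ttyS0,115200n8', 'tty0']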
Feb 13 20:18:51.785911 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Feb 13 20:18:51.792360 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:18:51.802361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:18:51.817629 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:18:51.854581 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:18:51.861103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:18:51.922053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:18:51.932059 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:18:51.952432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:18:51.959042 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:18:51.960919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:18:51.961838 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:18:51.968044 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:18:51.984322 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:18:52.013357 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 20:18:52.085337 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:18:52.085560 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:18:52.085583 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:18:52.085601 kernel: GPT:9289727 != 125829119
Feb 13 20:18:52.085619 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:18:52.085635 kernel: GPT:9289727 != 125829119
Feb 13 20:18:52.085647 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:18:52.085667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:18:52.085683 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:18:52.086017 kernel: ACPI: bus type USB registered
Feb 13 20:18:52.086039 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:18:52.091003 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 20:18:52.114790 kernel: usbcore: registered new interface driver hub
Feb 13 20:18:52.114810 kernel: usbcore: registered new device driver usb
Feb 13 20:18:52.114837 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Feb 13 20:18:52.097185 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:18:52.097606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:18:52.123084 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:18:52.123111 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:18:52.123123 kernel: libata version 3.00 loaded.
Feb 13 20:18:52.098596 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:18:52.099311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:18:52.099371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
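The GPT complaints above are the expected first-boot symptom of a small disk image written to a larger volume: the backup GPT header sits at the image's last LBA (9289727) instead of the disk's last LBA (125829119), and disk-uuid.service deals with it a little further down. The arithmetic, as a sketch:

    SECTOR = 512
    image_last_lba = 9289727    # where the backup GPT header actually is
    disk_last_lba = 125829119   # where it belongs on this provisioned disk

    print((image_last_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB original image
    print((disk_last_lba + 1) * SECTOR / 2**30)   # 60.0 GiB droplet volume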
Feb 13 20:18:52.129937 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 20:18:52.139311 kernel: scsi host1: ata_piix
Feb 13 20:18:52.139486 kernel: scsi host2: ata_piix
Feb 13 20:18:52.139603 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 20:18:52.139618 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 20:18:52.104881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:18:52.112017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:18:52.168858 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458)
Feb 13 20:18:52.173876 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447)
Feb 13 20:18:52.187827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:18:52.229300 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 20:18:52.229548 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 20:18:52.229678 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 20:18:52.229793 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 20:18:52.230168 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:18:52.230314 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 20:18:52.234557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:18:52.240124 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:18:52.244267 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:18:52.245265 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:18:52.250994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:18:52.263297 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:18:52.267167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:18:52.279268 disk-uuid[531]: Primary Header is updated.
Feb 13 20:18:52.279268 disk-uuid[531]: Secondary Entries is updated.
Feb 13 20:18:52.279268 disk-uuid[531]: Secondary Header is updated.
Feb 13 20:18:52.286461 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:18:52.303885 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:18:52.329251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:18:53.296890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:18:53.297454 disk-uuid[532]: The operation has completed successfully.
Feb 13 20:18:53.346353 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:18:53.346536 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:18:53.358181 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:18:53.379006 sh[560]: Success
Feb 13 20:18:53.394854 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:18:53.463880 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:18:53.475154 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
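verity-setup.service builds /dev/mapper/usr from the verity.usr partition and the verity.usrhash root hash on the kernel command line; the kernel line above only confirms which sha256 implementation dm-verity selected (sha256-avx2). A hedged sketch of inspecting the resulting mapping with cryptsetup's veritysetup tool; it assumes the tool is installed, root privileges, and that the mapping is named "usr" as mount.usr=/dev/mapper/usr implies:

    import subprocess

    # Query the active dm-verity mapping the initrd created for /usr.
    # The output includes the data/hash devices, root hash, and whether
    # the device is currently "verified".
    out = subprocess.run(
        ["veritysetup", "status", "usr"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)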
Feb 13 20:18:53.480913 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:18:53.510888 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:18:53.510960 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:18:53.510974 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:18:53.512216 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:18:53.513048 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:18:53.524192 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:18:53.525475 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:18:53.541143 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:18:53.545240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:18:53.557074 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:18:53.557140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:18:53.557154 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:18:53.561841 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:18:53.577851 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:18:53.577987 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:18:53.587035 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:18:53.593256 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:18:53.687870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:18:53.697119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:18:53.722737 systemd-networkd[745]: lo: Link UP
Feb 13 20:18:53.723720 systemd-networkd[745]: lo: Gained carrier
Feb 13 20:18:53.726648 systemd-networkd[745]: Enumeration completed
Feb 13 20:18:53.727033 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:18:53.727467 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:18:53.727472 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 20:18:53.732505 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:18:53.732512 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:18:53.738230 systemd-networkd[745]: eth0: Link UP
Feb 13 20:18:53.738235 systemd-networkd[745]: eth0: Gained carrier
Feb 13 20:18:53.738247 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:18:53.739468 systemd[1]: Reached target network.target - Network.
Feb 13 20:18:53.743268 systemd-networkd[745]: eth1: Link UP
Feb 13 20:18:53.743274 systemd-networkd[745]: eth1: Gained carrier
Feb 13 20:18:53.743288 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:18:53.760937 systemd-networkd[745]: eth0: DHCPv4 address 165.232.153.54/20, gateway 165.232.144.1 acquired from 169.254.169.253
Feb 13 20:18:53.765946 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253
Feb 13 20:18:53.766804 ignition[646]: Ignition 2.19.0
Feb 13 20:18:53.766827 ignition[646]: Stage: fetch-offline
Feb 13 20:18:53.766884 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:18:53.766894 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:18:53.768894 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:18:53.766995 ignition[646]: parsed url from cmdline: ""
Feb 13 20:18:53.767002 ignition[646]: no config URL provided
Feb 13 20:18:53.767008 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:18:53.767015 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:18:53.767022 ignition[646]: failed to fetch config: resource requires networking
Feb 13 20:18:53.767234 ignition[646]: Ignition finished successfully
Feb 13 20:18:53.775670 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:18:53.797117 ignition[754]: Ignition 2.19.0
Feb 13 20:18:53.797131 ignition[754]: Stage: fetch
Feb 13 20:18:53.797329 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:18:53.797345 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:18:53.797537 ignition[754]: parsed url from cmdline: ""
Feb 13 20:18:53.797541 ignition[754]: no config URL provided
Feb 13 20:18:53.797547 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:18:53.797557 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:18:53.797576 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 20:18:53.825804 ignition[754]: GET result: OK
Feb 13 20:18:53.826770 ignition[754]: parsing config with SHA512: dd9530ef6f8ead0b9780213e0db2be18efdd580a46d6c0387ecd80435d354ecfd2d4175aa057c24903f0367636ce1b0092a519e1555b3a8542bf5d3f2103d42f
Feb 13 20:18:53.831288 unknown[754]: fetched base config from "system"
Feb 13 20:18:53.831306 unknown[754]: fetched base config from "system"
Feb 13 20:18:53.831778 ignition[754]: fetch: fetch complete
Feb 13 20:18:53.831313 unknown[754]: fetched user config from "digitalocean"
Feb 13 20:18:53.831788 ignition[754]: fetch: fetch passed
Feb 13 20:18:53.831857 ignition[754]: Ignition finished successfully
Feb 13 20:18:53.833871 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:18:53.840215 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:18:53.863602 ignition[761]: Ignition 2.19.0
Feb 13 20:18:53.863618 ignition[761]: Stage: kargs
Feb 13 20:18:53.865449 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:18:53.865480 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:18:53.866764 ignition[761]: kargs: kargs passed
Feb 13 20:18:53.868287 ignition[761]: Ignition finished successfully
Feb 13 20:18:53.869903 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
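With DHCP up, the fetch stage GETs user data from the link-local metadata endpoint shown above and then logs a SHA512 of the config it parsed. A sketch of the same two steps; it only works from inside a droplet, and the digest is not guaranteed to match the journal's byte-for-byte if Ignition hashed a merged or re-rendered config rather than the raw user data:

    import hashlib
    import urllib.request

    # The URL is taken from the "GET http://..." line in the log above.
    URL = "http://169.254.169.254/metadata/v1/user-data"

    with urllib.request.urlopen(URL, timeout=5) as resp:  # droplet-only
        body = resp.read()

    # Compare against the "parsing config with SHA512: ..." journal entry.
    print(hashlib.sha512(body).hexdigest())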
Feb 13 20:18:53.876058 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:18:53.896196 ignition[767]: Ignition 2.19.0
Feb 13 20:18:53.896213 ignition[767]: Stage: disks
Feb 13 20:18:53.896392 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:18:53.896403 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:18:53.899124 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:18:53.897473 ignition[767]: disks: disks passed
Feb 13 20:18:53.900621 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:18:53.897532 ignition[767]: Ignition finished successfully
Feb 13 20:18:53.904679 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:18:53.905507 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:18:53.906147 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:18:53.907386 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:18:53.916097 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:18:53.932360 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:18:53.935077 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:18:53.943217 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:18:54.049872 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:18:54.050706 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:18:54.052452 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:18:54.070004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:18:54.072842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:18:54.075085 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Feb 13 20:18:54.082812 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:18:54.085341 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783)
Feb 13 20:18:54.085553 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:18:54.086443 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:18:54.088872 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:18:54.093856 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:18:54.093916 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:18:54.093930 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:18:54.097296 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:18:54.100935 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:18:54.106364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:18:54.197894 coreos-metadata[786]: Feb 13 20:18:54.197 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:18:54.200929 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:18:54.207185 coreos-metadata[785]: Feb 13 20:18:54.207 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:18:54.209741 coreos-metadata[786]: Feb 13 20:18:54.209 INFO Fetch successful Feb 13 20:18:54.212907 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:18:54.215241 coreos-metadata[786]: Feb 13 20:18:54.215 INFO wrote hostname ci-4081.3.1-6-670b8c47e7 to /sysroot/etc/hostname Feb 13 20:18:54.216120 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:18:54.220253 coreos-metadata[785]: Feb 13 20:18:54.218 INFO Fetch successful Feb 13 20:18:54.227219 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:18:54.228085 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 13 20:18:54.228240 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Feb 13 20:18:54.236054 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:18:54.363718 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:18:54.370041 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:18:54.380126 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:18:54.392856 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:18:54.413562 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:18:54.442301 ignition[904]: INFO : Ignition 2.19.0 Feb 13 20:18:54.443367 ignition[904]: INFO : Stage: mount Feb 13 20:18:54.443367 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:18:54.443367 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:18:54.445472 ignition[904]: INFO : mount: mount passed Feb 13 20:18:54.445472 ignition[904]: INFO : Ignition finished successfully Feb 13 20:18:54.445782 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:18:54.452007 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:18:54.509468 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:18:54.520173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:18:54.531110 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (915) Feb 13 20:18:54.531202 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:18:54.533231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:18:54.533301 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:18:54.537864 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:18:54.540665 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
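
The flatcar-metadata-hostname agent above fetches the droplet's metadata JSON and writes the hostname into the new root. A hypothetical Go sketch of the same flow; the top-level "hostname" field is an assumption about the metadata schema, and the paths come from the log:

    // Sketch of the metadata-hostname agent's visible behavior: GET the
    // metadata JSON, extract a hostname, write /sysroot/etc/hostname.
    package main

    import (
        "encoding/json"
        "net/http"
        "os"
    )

    func main() {
        resp, err := http.Get("http://169.254.169.254/metadata/v1.json")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var md struct {
            Hostname string `json:"hostname"` // assumed field name
        }
        if err := json.NewDecoder(resp.Body).Decode(&md); err != nil {
            panic(err)
        }
        // Matches "wrote hostname ... to /sysroot/etc/hostname" in the log.
        if err := os.WriteFile("/sysroot/etc/hostname", []byte(md.Hostname+"\n"), 0o644); err != nil {
            panic(err)
        }
    }
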
Feb 13 20:18:54.577835 ignition[932]: INFO : Ignition 2.19.0 Feb 13 20:18:54.580028 ignition[932]: INFO : Stage: files Feb 13 20:18:54.580028 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:18:54.580028 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:18:54.582195 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:18:54.582195 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:18:54.582195 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:18:54.585656 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:18:54.586618 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:18:54.586618 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:18:54.586563 unknown[932]: wrote ssh authorized keys file for user: core Feb 13 20:18:54.589680 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:18:54.589680 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:18:54.627771 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:18:54.770312 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:18:54.770312 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:18:54.772992 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:18:55.099322 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:18:55.283041 systemd-networkd[745]: eth1: Gained IPv6LL Feb 13 20:18:55.341688 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:18:55.341688 ignition[932]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:18:55.351811 ignition[932]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:18:55.353149 ignition[932]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:18:55.353149 ignition[932]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:18:55.353149 ignition[932]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:18:55.356669 ignition[932]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:18:55.356669 ignition[932]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:18:55.356669 ignition[932]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:18:55.356669 ignition[932]: INFO : files: files passed Feb 13 20:18:55.356669 ignition[932]: INFO : Ignition finished successfully Feb 13 20:18:55.355327 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:18:55.362295 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:18:55.366027 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:18:55.376008 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:18:55.376180 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:18:55.387072 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:18:55.387072 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:18:55.389017 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:18:55.389338 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:18:55.390697 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:18:55.405045 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:18:55.445072 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:18:55.445279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 20:18:55.446740 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:18:55.447773 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:18:55.449117 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:18:55.450458 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:18:55.473280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:18:55.477045 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:18:55.495925 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:18:55.496718 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:18:55.497947 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:18:55.498792 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:18:55.499061 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:18:55.500473 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:18:55.501206 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:18:55.502105 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:18:55.503122 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:18:55.503993 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:18:55.505078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:18:55.506019 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:18:55.506881 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:18:55.507712 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:18:55.508463 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:18:55.509225 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:18:55.509360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:18:55.510250 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:18:55.511171 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:18:55.511915 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:18:55.512075 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:18:55.512853 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:18:55.513012 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:18:55.514106 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:18:55.514285 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:18:55.515201 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:18:55.515363 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:18:55.515979 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:18:55.516093 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:18:55.523092 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 20:18:55.526115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:18:55.529085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:18:55.530013 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:18:55.531607 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:18:55.531731 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:18:55.540604 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:18:55.542304 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:18:55.543655 ignition[984]: INFO : Ignition 2.19.0 Feb 13 20:18:55.543655 ignition[984]: INFO : Stage: umount Feb 13 20:18:55.543655 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:18:55.543655 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:18:55.548042 ignition[984]: INFO : umount: umount passed Feb 13 20:18:55.548042 ignition[984]: INFO : Ignition finished successfully Feb 13 20:18:55.547702 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:18:55.547843 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:18:55.549083 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:18:55.549188 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:18:55.549722 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:18:55.549769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:18:55.550388 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:18:55.550429 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:18:55.552097 systemd[1]: Stopped target network.target - Network. Feb 13 20:18:55.552396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:18:55.552449 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:18:55.559992 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:18:55.560351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:18:55.560451 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:18:55.561531 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:18:55.562070 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:18:55.563155 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:18:55.563209 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:18:55.564432 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:18:55.564478 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:18:55.565985 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:18:55.566044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:18:55.566682 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:18:55.566736 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:18:55.567278 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:18:55.567787 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:18:55.571317 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 20:18:55.571924 systemd-networkd[745]: eth1: DHCPv6 lease lost Feb 13 20:18:55.573272 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:18:55.574198 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:18:55.575925 systemd-networkd[745]: eth0: DHCPv6 lease lost Feb 13 20:18:55.578052 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:18:55.578180 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:18:55.579049 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:18:55.579151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:18:55.582188 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:18:55.582232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:18:55.583068 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:18:55.583142 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:18:55.589063 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:18:55.589541 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:18:55.589623 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:18:55.591444 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:18:55.591508 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:18:55.593548 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:18:55.593603 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:18:55.594736 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:18:55.594792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:18:55.595910 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:18:55.607424 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:18:55.607558 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:18:55.610248 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:18:55.610421 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:18:55.612026 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:18:55.612102 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:18:55.613175 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:18:55.613215 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:18:55.613946 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:18:55.613999 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:18:55.615064 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:18:55.615112 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:18:55.615863 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:18:55.615914 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:18:55.632988 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:18:55.633606 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 13 20:18:55.633724 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:18:55.634387 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:18:55.634484 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:18:55.635110 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:18:55.635177 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:18:55.636387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:18:55.636450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:18:55.645010 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:18:55.645175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:18:55.647247 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:18:55.654117 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:18:55.665773 systemd[1]: Switching root. Feb 13 20:18:55.745055 systemd-journald[184]: Journal stopped Feb 13 20:18:56.913793 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 20:18:56.913907 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:18:56.913926 kernel: SELinux: policy capability open_perms=1 Feb 13 20:18:56.913944 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:18:56.913956 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:18:56.913969 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:18:56.913985 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:18:56.913997 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:18:56.914009 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:18:56.914021 kernel: audit: type=1403 audit(1739477935.959:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:18:56.914034 systemd[1]: Successfully loaded SELinux policy in 41.921ms. Feb 13 20:18:56.914057 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.177ms. Feb 13 20:18:56.914071 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:18:56.914084 systemd[1]: Detected virtualization kvm. Feb 13 20:18:56.914099 systemd[1]: Detected architecture x86-64. Feb 13 20:18:56.914111 systemd[1]: Detected first boot. Feb 13 20:18:56.914124 systemd[1]: Hostname set to <ci-4081.3.1-6-670b8c47e7>. Feb 13 20:18:56.914136 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:18:56.914148 zram_generator::config[1027]: No configuration found. Feb 13 20:18:56.914164 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:18:56.914188 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:18:56.914201 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:18:56.914216 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:18:56.914231 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:18:56.914244 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:18:56.914277 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:18:56.914305 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:18:56.914318 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:18:56.914331 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:18:56.914344 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:18:56.914356 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:18:56.914372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:18:56.914384 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:18:56.914397 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:18:56.914413 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:18:56.914432 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:18:56.914450 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:18:56.914474 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:18:56.914489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:18:56.914501 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:18:56.914517 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:18:56.914530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:18:56.914543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:18:56.914555 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:18:56.914568 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:18:56.914581 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:18:56.914604 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:18:56.914616 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:18:56.914674 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:18:56.914687 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:18:56.914700 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:18:56.914712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:18:56.914730 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:18:56.914749 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:18:56.914766 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:18:56.914783 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:18:56.914795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:56.914813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Feb 13 20:18:56.915876 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:18:56.915902 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:18:56.915921 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:18:56.915934 systemd[1]: Reached target machines.target - Containers. Feb 13 20:18:56.915947 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:18:56.915966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:18:56.915979 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:18:56.915995 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:18:56.916013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:18:56.916047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:18:56.916060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:18:56.916072 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:18:56.916085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:18:56.916099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:18:56.916116 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:18:56.916130 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:18:56.916142 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:18:56.916155 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:18:56.916169 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:18:56.917154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:18:56.917172 kernel: fuse: init (API version 7.39) Feb 13 20:18:56.917186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:18:56.917199 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:18:56.917217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:18:56.917231 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:18:56.917244 systemd[1]: Stopped verity-setup.service. Feb 13 20:18:56.917257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:56.917269 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:18:56.917282 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:18:56.917295 kernel: loop: module loaded Feb 13 20:18:56.917307 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:18:56.917322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:18:56.917335 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:18:56.917347 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Feb 13 20:18:56.917360 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:18:56.917410 systemd-journald[1096]: Collecting audit messages is disabled. Feb 13 20:18:56.917437 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:18:56.917449 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:18:56.917462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:18:56.917475 systemd-journald[1096]: Journal started Feb 13 20:18:56.917509 systemd-journald[1096]: Runtime Journal (/run/log/journal/b7283ebdaa7845e98f49a21947434e0c) is 4.9M, max 39.3M, 34.4M free. Feb 13 20:18:56.607528 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:18:56.918966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:18:56.629472 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:18:56.630242 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:18:56.920941 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:18:56.920911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:18:56.921065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:18:56.922020 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:18:56.922155 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:18:56.922778 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:18:56.922955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:18:56.923652 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:18:56.924321 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:18:56.925374 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:18:56.940471 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:18:56.947967 kernel: ACPI: bus type drm_connector registered Feb 13 20:18:56.948044 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:18:56.950980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:18:56.951484 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:18:56.951533 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:18:56.955131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:18:56.961146 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:18:56.970063 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:18:56.970632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:18:56.975638 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:18:56.977319 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:18:56.978465 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 20:18:56.988840 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:18:56.989388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:18:56.992071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:18:56.996082 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:18:57.004042 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:18:57.008725 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:18:57.008989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:18:57.010061 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:18:57.010641 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:18:57.011808 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:18:57.022394 systemd-journald[1096]: Time spent on flushing to /var/log/journal/b7283ebdaa7845e98f49a21947434e0c is 53.227ms for 986 entries. Feb 13 20:18:57.022394 systemd-journald[1096]: System Journal (/var/log/journal/b7283ebdaa7845e98f49a21947434e0c) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:18:57.097128 systemd-journald[1096]: Received client request to flush runtime journal. Feb 13 20:18:57.097187 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 20:18:57.039911 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:18:57.042618 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:18:57.050167 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:18:57.098368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:18:57.099130 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:18:57.107055 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:18:57.107917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:18:57.135760 kernel: loop1: detected capacity change from 0 to 210664 Feb 13 20:18:57.149432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:18:57.150551 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:18:57.161556 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. Feb 13 20:18:57.162112 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. Feb 13 20:18:57.168236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:18:57.174608 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:18:57.177434 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:18:57.188072 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:18:57.206844 kernel: loop2: detected capacity change from 0 to 8 Feb 13 20:18:57.216738 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Feb 13 20:18:57.236875 kernel: loop3: detected capacity change from 0 to 140768 Feb 13 20:18:57.266614 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:18:57.276049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:18:57.301604 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Feb 13 20:18:57.301642 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Feb 13 20:18:57.308922 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 20:18:57.313075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:18:57.338589 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 20:18:57.383918 kernel: loop6: detected capacity change from 0 to 8 Feb 13 20:18:57.390873 kernel: loop7: detected capacity change from 0 to 140768 Feb 13 20:18:57.426641 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 20:18:57.427268 (sd-merge)[1174]: Merged extensions into '/usr'. Feb 13 20:18:57.435051 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:18:57.435089 systemd[1]: Reloading... Feb 13 20:18:57.568505 zram_generator::config[1204]: No configuration found. Feb 13 20:18:57.750200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:18:57.785700 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:18:57.821459 systemd[1]: Reloading finished in 385 ms. Feb 13 20:18:57.866947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:18:57.867774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:18:57.879212 systemd[1]: Starting ensure-sysext.service... Feb 13 20:18:57.884212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:18:57.900885 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:18:57.900917 systemd[1]: Reloading... Feb 13 20:18:57.941288 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:18:57.941655 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:18:57.944705 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:18:57.945019 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Feb 13 20:18:57.945086 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Feb 13 20:18:57.951280 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:18:57.951294 systemd-tmpfiles[1245]: Skipping /boot Feb 13 20:18:57.976993 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:18:57.977008 systemd-tmpfiles[1245]: Skipping /boot Feb 13 20:18:58.044842 zram_generator::config[1277]: No configuration found. 
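
The (sd-merge) lines above show systemd-sysext activating four extension images (the loop devices detected just before) and merging them into '/usr'. Conceptually, the merge is a read-only overlayfs mounted over /usr with the extension trees as additional lower layers. The sketch below illustrates that idea only; the /run staging paths are assumptions, not systemd's real internal hierarchy:

    // Conceptual sketch of a sysext-style merge: overlay the extension
    // trees on top of the base /usr and mount the result read-only back
    // over /usr. Not systemd's actual code; lowerdir paths are invented.
    package main

    import "golang.org/x/sys/unix"

    func main() {
        // lowerdir lists layers topmost-first, with the real /usr last.
        data := "lowerdir=/run/sysext/kubernetes/usr:/run/sysext/docker-flatcar/usr:/usr"
        if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, data); err != nil {
            panic(err)
        }
    }
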
Feb 13 20:18:58.189764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:18:58.241392 systemd[1]: Reloading finished in 340 ms. Feb 13 20:18:58.262381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:18:58.268348 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:18:58.281143 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:18:58.284051 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:18:58.289135 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:18:58.301142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:18:58.306083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:18:58.317174 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:18:58.324304 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.324516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:18:58.332245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:18:58.336114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:18:58.344160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:18:58.344802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:18:58.344966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.352989 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:18:58.354624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.354813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:18:58.355028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:18:58.355117 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.361651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.361935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:18:58.373346 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:18:58.379431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:18:58.379611 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 20:18:58.381891 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:18:58.383035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:18:58.383169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:18:58.397803 systemd[1]: Finished ensure-sysext.service. Feb 13 20:18:58.412281 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:18:58.431569 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:18:58.433148 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:18:58.434315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:18:58.434903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:18:58.435665 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Feb 13 20:18:58.437143 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:18:58.437314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:18:58.438081 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:18:58.438203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:18:58.445607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:18:58.445698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:18:58.451065 augenrules[1348]: No rules Feb 13 20:18:58.454251 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:18:58.456437 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:18:58.463975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:18:58.465238 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:18:58.470352 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:18:58.498692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:18:58.510032 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:18:58.552761 systemd-resolved[1321]: Positive Trust Anchors: Feb 13 20:18:58.552779 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:18:58.552843 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:18:58.563296 systemd-resolved[1321]: Using system hostname 'ci-4081.3.1-6-670b8c47e7'. Feb 13 20:18:58.570868 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Feb 13 20:18:58.571364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:18:58.582023 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:18:58.582669 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:18:58.608316 systemd-networkd[1365]: lo: Link UP Feb 13 20:18:58.608341 systemd-networkd[1365]: lo: Gained carrier Feb 13 20:18:58.609554 systemd-networkd[1365]: Enumeration completed Feb 13 20:18:58.609677 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:18:58.610315 systemd[1]: Reached target network.target - Network. Feb 13 20:18:58.620021 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:18:58.657808 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:18:58.681852 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1367) Feb 13 20:18:58.694055 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 20:18:58.695352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.695503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:18:58.707033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:18:58.709994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:18:58.715979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:18:58.716577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:18:58.716628 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:18:58.716644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:18:58.732732 systemd-networkd[1365]: eth1: Configuring with /run/systemd/network/10-c2:22:4e:dd:ed:bf.network. Feb 13 20:18:58.737704 systemd-networkd[1365]: eth1: Link UP Feb 13 20:18:58.737714 systemd-networkd[1365]: eth1: Gained carrier Feb 13 20:18:58.740755 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 20:18:58.743433 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:18:58.746459 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 20:18:58.764298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:18:58.764478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:18:58.770617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:18:58.771885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:18:58.777162 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:18:58.780210 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:18:58.780387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 20:18:58.781174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:18:58.783055 systemd-networkd[1365]: eth0: Configuring with /run/systemd/network/10-42:ec:cf:b9:ca:4a.network. Feb 13 20:18:58.784020 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:18:58.784403 systemd-networkd[1365]: eth0: Link UP Feb 13 20:18:58.784412 systemd-networkd[1365]: eth0: Gained carrier Feb 13 20:18:58.786438 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:18:58.787849 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:18:58.797904 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:18:58.807843 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:18:58.823855 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:18:58.825458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:18:58.835334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:18:58.842849 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:18:58.857352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:18:58.871916 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:18:58.872011 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:18:58.874113 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:18:58.874285 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:18:58.874312 kernel: [drm] features: -context_init Feb 13 20:18:58.876010 kernel: [drm] number of scanouts: 1 Feb 13 20:18:58.876081 kernel: [drm] number of cap sets: 0 Feb 13 20:18:58.876890 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:18:58.882168 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:18:58.882249 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:18:58.887938 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:18:58.916142 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:18:58.920232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:18:58.954010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:18:58.954946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:18:58.984893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:18:59.095436 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:18:59.102780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:18:59.116198 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:18:59.123069 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:18:59.142997 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:18:59.174096 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:18:59.174623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 20:18:59.175074 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:18:59.176319 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:18:59.176464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:18:59.176925 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:18:59.177098 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:18:59.177170 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:18:59.177229 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:18:59.177253 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:18:59.177303 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:18:59.179266 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:18:59.181593 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:18:59.188123 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:18:59.191565 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:18:59.196261 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:18:59.198809 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:18:59.199261 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:18:59.199700 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:18:59.199724 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:18:59.202562 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:18:59.207144 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:18:59.215527 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:18:59.228137 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:18:59.235025 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:18:59.238757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:18:59.239229 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:18:59.245041 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:18:59.249971 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:18:59.254094 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:18:59.257794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:18:59.268388 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:18:59.270766 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:18:59.271283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 20:18:59.281644 extend-filesystems[1434]: Found loop4 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found loop5 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found loop6 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found loop7 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda1 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda2 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda3 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found usr Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda4 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda6 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda7 Feb 13 20:18:59.281644 extend-filesystems[1434]: Found vda9 Feb 13 20:18:59.281644 extend-filesystems[1434]: Checking size of /dev/vda9 Feb 13 20:18:59.281213 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:18:59.282509 dbus-daemon[1431]: [system] SELinux support is enabled Feb 13 20:18:59.300039 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:18:59.301382 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:18:59.312898 jq[1432]: false Feb 13 20:18:59.311807 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:18:59.322349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:18:59.322583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:18:59.332584 jq[1445]: true Feb 13 20:18:59.341779 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:18:59.344315 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:18:59.347892 update_engine[1442]: I20250213 20:18:59.346383 1442 main.cc:92] Flatcar Update Engine starting Feb 13 20:18:59.347411 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:18:59.347500 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:18:59.347524 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:18:59.351788 update_engine[1442]: I20250213 20:18:59.351728 1442 update_check_scheduler.cc:74] Next update check in 3m34s Feb 13 20:18:59.352525 coreos-metadata[1430]: Feb 13 20:18:59.352 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:18:59.354617 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:18:59.367506 coreos-metadata[1430]: Feb 13 20:18:59.363 INFO Fetch successful Feb 13 20:18:59.372042 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:18:59.386421 extend-filesystems[1434]: Resized partition /dev/vda9 Feb 13 20:18:59.390409 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:18:59.390628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:18:59.393335 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
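The coreos-metadata fetch above hits DigitalOcean's link-local metadata endpoint. A minimal Python sketch of the same request (it only works from inside a droplet; the hostname field is assumed per the documented v1 schema):

    import json
    import urllib.request

    # Link-local metadata service; one JSON document describes the droplet.
    URL = "http://169.254.169.254/metadata/v1.json"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        metadata = json.load(resp)
    print(metadata.get("hostname"))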
Feb 13 20:18:59.394233 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:18:59.405994 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:18:59.409433 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:18:59.415900 jq[1452]: true Feb 13 20:18:59.423041 tar[1451]: linux-amd64/helm Feb 13 20:18:59.434846 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:18:59.506235 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Feb 13 20:18:59.509168 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:18:59.514792 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:18:59.638018 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:18:59.657074 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:18:59.685872 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:18:59.685872 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:18:59.685872 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:18:59.695306 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Feb 13 20:18:59.695306 extend-filesystems[1434]: Found vdb Feb 13 20:18:59.686672 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:18:59.687059 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:18:59.693522 systemd-logind[1441]: New seat seat0. Feb 13 20:18:59.694780 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:18:59.694800 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:18:59.699558 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:18:59.725870 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:18:59.725669 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:18:59.742464 systemd[1]: Starting sshkeys.service... Feb 13 20:18:59.785231 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:18:59.794496 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:18:59.900010 coreos-metadata[1502]: Feb 13 20:18:59.899 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:18:59.919841 coreos-metadata[1502]: Feb 13 20:18:59.919 INFO Fetch successful Feb 13 20:18:59.954180 unknown[1502]: wrote ssh authorized keys file for user: core Feb 13 20:19:00.017252 update-ssh-keys[1510]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:19:00.018674 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:19:00.021517 systemd[1]: Finished sshkeys.service. Feb 13 20:19:00.042946 containerd[1469]: time="2025-02-13T20:19:00.041144235Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:19:00.102843 containerd[1469]: time="2025-02-13T20:19:00.102635260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.105901677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.105960860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.105985201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106209461Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106234072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106304799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106343542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106622128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106649737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106667835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:19:00.107861 containerd[1469]: time="2025-02-13T20:19:00.106681391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:19:00.108372 containerd[1469]: time="2025-02-13T20:19:00.106779792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:19:00.108372 containerd[1469]: time="2025-02-13T20:19:00.107099600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:19:00.108372 containerd[1469]: time="2025-02-13T20:19:00.107261607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:19:00.108372 containerd[1469]: time="2025-02-13T20:19:00.107283779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 20:19:00.108372 containerd[1469]: time="2025-02-13T20:19:00.107380214Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:19:00.108372 containerd[1469]: time="2025-02-13T20:19:00.107442123Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:19:00.121145 containerd[1469]: time="2025-02-13T20:19:00.121051813Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:19:00.121284 containerd[1469]: time="2025-02-13T20:19:00.121179764Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:19:00.121321 containerd[1469]: time="2025-02-13T20:19:00.121275677Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:19:00.121360 containerd[1469]: time="2025-02-13T20:19:00.121334095Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:19:00.121437 containerd[1469]: time="2025-02-13T20:19:00.121368091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:19:00.121668 containerd[1469]: time="2025-02-13T20:19:00.121641989Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122365374Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122584174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122617775Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122638463Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122665844Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122682627Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122700074Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122721925Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122740756Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122761578Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122783086Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.122800379Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.123107944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.123859 containerd[1469]: time="2025-02-13T20:19:00.123146186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123165913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123194561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123233805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123253041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123272598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123286997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123299767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123340217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123377869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123399917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123420936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123470390Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123576114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123599905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124487 containerd[1469]: time="2025-02-13T20:19:00.123616914Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.123697705Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.123718733Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.123734815Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.123967857Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.123981053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.123995337Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.124013606Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:19:00.124999 containerd[1469]: time="2025-02-13T20:19:00.124030277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:19:00.125335 containerd[1469]: time="2025-02-13T20:19:00.124700951Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:19:00.125335 containerd[1469]: time="2025-02-13T20:19:00.124920497Z" level=info msg="Connect containerd service" Feb 13 20:19:00.125335 containerd[1469]: time="2025-02-13T20:19:00.124972236Z" level=info msg="using legacy CRI server" Feb 13 20:19:00.125335 containerd[1469]: time="2025-02-13T20:19:00.124982604Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:19:00.125335 containerd[1469]: time="2025-02-13T20:19:00.125134647Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126218891Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126360193Z" level=info msg="Start subscribing containerd event" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126475310Z" level=info msg="Start recovering state" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126619605Z" level=info msg="Start event monitor" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126638982Z" level=info msg="Start snapshots syncer" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126655027Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:19:00.128629 containerd[1469]: time="2025-02-13T20:19:00.126667267Z" level=info msg="Start streaming server" Feb 13 20:19:00.131272 containerd[1469]: time="2025-02-13T20:19:00.130434205Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:19:00.131272 containerd[1469]: time="2025-02-13T20:19:00.130554422Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:19:00.130791 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:19:00.141438 containerd[1469]: time="2025-02-13T20:19:00.140898641Z" level=info msg="containerd successfully booted in 0.102243s" Feb 13 20:19:00.150397 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:19:00.198703 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:19:00.211203 systemd-networkd[1365]: eth0: Gained IPv6LL Feb 13 20:19:00.211735 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:19:00.214398 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:19:00.224401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:19:00.229347 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:19:00.243070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:00.256331 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:19:00.271969 systemd[1]: issuegen.service: Deactivated successfully. 
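containerd reports two serving addresses above: the main gRPC socket and its ttrpc variant at /run/containerd/containerd.sock[.ttrpc]. A quick connectivity probe against the main socket, as a sketch (a plain connect test run as root, not a real CRI call):

    import socket

    SOCK = "/run/containerd/containerd.sock"

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(2)
    s.connect(SOCK)  # raises if containerd is not accepting connections
    print("containerd is accepting connections on", SOCK)
    s.close()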
Feb 13 20:19:00.272423 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:19:00.290300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:19:00.355280 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:19:00.373588 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:19:00.386400 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:19:00.388316 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:19:00.392980 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:19:00.660461 tar[1451]: linux-amd64/LICENSE Feb 13 20:19:00.661861 tar[1451]: linux-amd64/README.md Feb 13 20:19:00.681498 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:19:00.723034 systemd-networkd[1365]: eth1: Gained IPv6LL Feb 13 20:19:00.723626 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:19:01.716107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:01.717444 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:19:01.720940 systemd[1]: Startup finished in 1.159s (kernel) + 5.252s (initrd) + 5.801s (userspace) = 12.214s. Feb 13 20:19:01.729713 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:19:02.648988 kubelet[1552]: E0213 20:19:02.648852 1552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:19:02.653284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:19:02.653562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:19:02.654521 systemd[1]: kubelet.service: Consumed 1.504s CPU time. Feb 13 20:19:04.520462 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:19:04.529229 systemd[1]: Started sshd@0-165.232.153.54:22-147.75.109.163:47728.service - OpenSSH per-connection server daemon (147.75.109.163:47728). Feb 13 20:19:04.607253 sshd[1565]: Accepted publickey for core from 147.75.109.163 port 47728 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:04.610116 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:04.620150 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:19:04.630314 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:19:04.634327 systemd-logind[1441]: New session 1 of user core. Feb 13 20:19:04.647656 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:19:04.664394 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:19:04.669079 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:19:04.786776 systemd[1569]: Queued start job for default target default.target. Feb 13 20:19:04.799279 systemd[1569]: Created slice app.slice - User Application Slice. Feb 13 20:19:04.799326 systemd[1569]: Reached target paths.target - Paths. 
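The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is generated by kubeadm during init/join, so until that happens every start attempt exits with this error. A sketch of the kind of KubeletConfiguration that file holds (illustrative defaults, not recovered from this host):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests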
Feb 13 20:19:04.799342 systemd[1569]: Reached target timers.target - Timers. Feb 13 20:19:04.801041 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:19:04.816729 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:19:04.816913 systemd[1569]: Reached target sockets.target - Sockets. Feb 13 20:19:04.816961 systemd[1569]: Reached target basic.target - Basic System. Feb 13 20:19:04.817033 systemd[1569]: Reached target default.target - Main User Target. Feb 13 20:19:04.817069 systemd[1569]: Startup finished in 137ms. Feb 13 20:19:04.817554 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:19:04.827154 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:19:04.891761 systemd[1]: Started sshd@1-165.232.153.54:22-147.75.109.163:47736.service - OpenSSH per-connection server daemon (147.75.109.163:47736). Feb 13 20:19:04.974936 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 47736 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:04.979305 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:04.987176 systemd-logind[1441]: New session 2 of user core. Feb 13 20:19:04.999146 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:19:05.063533 sshd[1580]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:05.066916 systemd[1]: sshd@1-165.232.153.54:22-147.75.109.163:47736.service: Deactivated successfully. Feb 13 20:19:05.069025 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:19:05.079443 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:19:05.084387 systemd[1]: Started sshd@2-165.232.153.54:22-147.75.109.163:47742.service - OpenSSH per-connection server daemon (147.75.109.163:47742). Feb 13 20:19:05.086394 systemd-logind[1441]: Removed session 2. Feb 13 20:19:05.145361 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 47742 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:05.147066 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:05.152254 systemd-logind[1441]: New session 3 of user core. Feb 13 20:19:05.171192 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:19:05.229046 sshd[1587]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:05.243428 systemd[1]: sshd@2-165.232.153.54:22-147.75.109.163:47742.service: Deactivated successfully. Feb 13 20:19:05.246096 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:19:05.248222 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:19:05.253370 systemd[1]: Started sshd@3-165.232.153.54:22-147.75.109.163:47748.service - OpenSSH per-connection server daemon (147.75.109.163:47748). Feb 13 20:19:05.256590 systemd-logind[1441]: Removed session 3. Feb 13 20:19:05.312365 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 47748 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:05.314726 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:05.323607 systemd-logind[1441]: New session 4 of user core. Feb 13 20:19:05.326098 systemd[1]: Started session-4.scope - Session 4 of User core. 
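The SHA256:ogQi+... string in the "Accepted publickey" lines above is OpenSSH's key fingerprint: the base64-encoded SHA-256 of the raw key blob, with base64 padding stripped. A sketch that reproduces it from an authorized_keys entry (the path is assumed; any OpenSSH public key line works):

    import base64
    import hashlib

    with open("/home/core/.ssh/authorized_keys") as f:
        # Line format: "<type> <base64 blob> [comment]"
        key_blob = base64.b64decode(f.readline().split()[1])

    digest = hashlib.sha256(key_blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))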
Feb 13 20:19:05.390465 sshd[1594]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:05.401141 systemd[1]: sshd@3-165.232.153.54:22-147.75.109.163:47748.service: Deactivated successfully. Feb 13 20:19:05.404187 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:19:05.407228 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:19:05.413278 systemd[1]: Started sshd@4-165.232.153.54:22-147.75.109.163:47756.service - OpenSSH per-connection server daemon (147.75.109.163:47756). Feb 13 20:19:05.415763 systemd-logind[1441]: Removed session 4. Feb 13 20:19:05.524783 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 47756 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:05.526646 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:05.534557 systemd-logind[1441]: New session 5 of user core. Feb 13 20:19:05.547092 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:19:05.616776 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:19:05.617545 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:19:05.637340 sudo[1604]: pam_unix(sudo:session): session closed for user root Feb 13 20:19:05.641930 sshd[1601]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:05.650803 systemd[1]: sshd@4-165.232.153.54:22-147.75.109.163:47756.service: Deactivated successfully. Feb 13 20:19:05.652872 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:19:05.654358 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:19:05.661288 systemd[1]: Started sshd@5-165.232.153.54:22-147.75.109.163:47770.service - OpenSSH per-connection server daemon (147.75.109.163:47770). Feb 13 20:19:05.663329 systemd-logind[1441]: Removed session 5. Feb 13 20:19:05.707050 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 47770 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:05.709360 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:05.714701 systemd-logind[1441]: New session 6 of user core. Feb 13 20:19:05.724092 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:19:05.783836 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:19:05.784310 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:19:05.789575 sudo[1613]: pam_unix(sudo:session): session closed for user root Feb 13 20:19:05.797085 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:19:05.797405 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:19:05.818545 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:19:05.820486 auditctl[1616]: No rules Feb 13 20:19:05.821143 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:19:05.821549 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:19:05.830442 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:19:05.866419 augenrules[1634]: No rules Feb 13 20:19:05.868411 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
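The auditctl "No rules" result above follows from the two sudo commands that deleted the rule fragments: restarting audit-rules re-runs the merge of everything under /etc/audit/rules.d/ before loading it. Roughly, as a sketch of that merge step:

    import glob

    fragments = sorted(glob.glob("/etc/audit/rules.d/*.rules"))
    merged = "\n".join(open(path).read() for path in fragments)
    # With 80-selinux.rules and 99-default.rules removed, nothing is left.
    print(merged if merged.strip() else "No rules")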
Feb 13 20:19:05.872303 sudo[1612]: pam_unix(sudo:session): session closed for user root Feb 13 20:19:05.876445 sshd[1609]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:05.888688 systemd[1]: sshd@5-165.232.153.54:22-147.75.109.163:47770.service: Deactivated successfully. Feb 13 20:19:05.891636 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:19:05.894160 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:19:05.903357 systemd[1]: Started sshd@6-165.232.153.54:22-147.75.109.163:47774.service - OpenSSH per-connection server daemon (147.75.109.163:47774). Feb 13 20:19:05.905794 systemd-logind[1441]: Removed session 6. Feb 13 20:19:05.950313 sshd[1642]: Accepted publickey for core from 147.75.109.163 port 47774 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:19:05.952720 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:05.961177 systemd-logind[1441]: New session 7 of user core. Feb 13 20:19:05.968187 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:19:06.028755 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:19:06.029508 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:19:06.509146 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:19:06.518435 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:19:06.954313 dockerd[1661]: time="2025-02-13T20:19:06.953255492Z" level=info msg="Starting up" Feb 13 20:19:07.091126 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport956325199-merged.mount: Deactivated successfully. Feb 13 20:19:07.130332 dockerd[1661]: time="2025-02-13T20:19:07.130100701Z" level=info msg="Loading containers: start." Feb 13 20:19:07.255897 kernel: Initializing XFRM netlink socket Feb 13 20:19:07.282955 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:19:07.285034 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:19:07.294313 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:19:07.350546 systemd-networkd[1365]: docker0: Link UP Feb 13 20:19:07.350980 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Feb 13 20:19:07.379623 dockerd[1661]: time="2025-02-13T20:19:07.379554892Z" level=info msg="Loading containers: done." Feb 13 20:19:07.401885 dockerd[1661]: time="2025-02-13T20:19:07.401547789Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:19:07.401885 dockerd[1661]: time="2025-02-13T20:19:07.401672396Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:19:07.401885 dockerd[1661]: time="2025-02-13T20:19:07.401802883Z" level=info msg="Daemon has completed initialization" Feb 13 20:19:07.453617 dockerd[1661]: time="2025-02-13T20:19:07.453468614Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:19:07.453743 systemd[1]: Started docker.service - Docker Application Container Engine. 
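dockerd finishes above by announcing "API listen on /run/docker.sock". A stdlib-only sketch of querying that API over the Unix socket; the /version endpoint should report the 26.1.0 daemon version logged above:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.loads(conn.getresponse().read())["Version"])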
Feb 13 20:19:08.474558 containerd[1469]: time="2025-02-13T20:19:08.474497896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:19:09.100213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166383025.mount: Deactivated successfully. Feb 13 20:19:10.456039 containerd[1469]: time="2025-02-13T20:19:10.454952565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:10.457988 containerd[1469]: time="2025-02-13T20:19:10.457898298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 20:19:10.460297 containerd[1469]: time="2025-02-13T20:19:10.458769661Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:10.462379 containerd[1469]: time="2025-02-13T20:19:10.462329498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:10.464365 containerd[1469]: time="2025-02-13T20:19:10.464308479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.989749984s" Feb 13 20:19:10.464763 containerd[1469]: time="2025-02-13T20:19:10.464370842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:19:10.496929 containerd[1469]: time="2025-02-13T20:19:10.496832990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:19:12.390881 containerd[1469]: time="2025-02-13T20:19:12.389301716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:12.392031 containerd[1469]: time="2025-02-13T20:19:12.391955914Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 20:19:12.393270 containerd[1469]: time="2025-02-13T20:19:12.393179860Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:12.399843 containerd[1469]: time="2025-02-13T20:19:12.399755008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:12.402317 containerd[1469]: time="2025-02-13T20:19:12.402235138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 
1.905154255s" Feb 13 20:19:12.402317 containerd[1469]: time="2025-02-13T20:19:12.402297379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 20:19:12.439405 containerd[1469]: time="2025-02-13T20:19:12.439344803Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:19:12.903968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:19:12.913893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:13.124147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:13.139284 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:19:13.231762 kubelet[1888]: E0213 20:19:13.230286 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:19:13.239134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:19:13.239297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:19:13.953029 containerd[1469]: time="2025-02-13T20:19:13.952948483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:13.955891 containerd[1469]: time="2025-02-13T20:19:13.955786673Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 20:19:13.957673 containerd[1469]: time="2025-02-13T20:19:13.957576941Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:13.966856 containerd[1469]: time="2025-02-13T20:19:13.965175733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:13.966856 containerd[1469]: time="2025-02-13T20:19:13.966375467Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.526969024s" Feb 13 20:19:13.966856 containerd[1469]: time="2025-02-13T20:19:13.966417236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:19:14.001353 containerd[1469]: time="2025-02-13T20:19:14.001272470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:19:15.145773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284223993.mount: Deactivated successfully. 
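Each pull message above pairs a byte count with a wall-clock duration, which gives a rough effective throughput per image (ignoring layer reuse and decompression). Worked directly from the logged numbers:

    # (bytes read, seconds) as reported in the log above
    pulls = {
        "kube-apiserver:v1.30.10": (32678214, 1.989749984),
        "kube-controller-manager:v1.30.10": (29611545, 1.905154255),
        "kube-scheduler:v1.30.10": (17782130, 1.526969024),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")  # ~16.4, ~15.5, ~11.6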
Feb 13 20:19:15.922927 containerd[1469]: time="2025-02-13T20:19:15.922003748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:15.923668 containerd[1469]: time="2025-02-13T20:19:15.923614227Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:19:15.925108 containerd[1469]: time="2025-02-13T20:19:15.925027356Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:15.929170 containerd[1469]: time="2025-02-13T20:19:15.927653034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:15.929170 containerd[1469]: time="2025-02-13T20:19:15.928983335Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.927634744s" Feb 13 20:19:15.929170 containerd[1469]: time="2025-02-13T20:19:15.929023895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:19:15.964214 containerd[1469]: time="2025-02-13T20:19:15.964048570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:19:15.966591 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Feb 13 20:19:16.636652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673753593.mount: Deactivated successfully. 
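The systemd-resolved line above means its EDNS0 probes to 67.207.67.3 failed, so it fell back to plain UDP queries with no OPT record attached. A hand-built query in that degraded form, as a sketch (the query name is an arbitrary example):

    import socket
    import struct

    # DNS header: id, flags (RD=1), 1 question, 0 answer/authority/additional
    query = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    for label in "flatcar.org".split("."):
        query += bytes([len(label)]) + label.encode()
    query += b"\x00" + struct.pack(">HH", 1, 1)  # root label, QTYPE=A, QCLASS=IN

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    s.sendto(query, ("67.207.67.3", 53))
    print(len(s.recv(512)), "bytes in reply")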
Feb 13 20:19:17.941695 containerd[1469]: time="2025-02-13T20:19:17.940036328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:17.942399 containerd[1469]: time="2025-02-13T20:19:17.942341730Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:19:17.943264 containerd[1469]: time="2025-02-13T20:19:17.943228099Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:17.948327 containerd[1469]: time="2025-02-13T20:19:17.948276197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:17.949279 containerd[1469]: time="2025-02-13T20:19:17.949232351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.985113522s" Feb 13 20:19:17.949279 containerd[1469]: time="2025-02-13T20:19:17.949274065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:19:17.981640 containerd[1469]: time="2025-02-13T20:19:17.981604654Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:19:18.483165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436568345.mount: Deactivated successfully. 
Feb 13 20:19:18.493569 containerd[1469]: time="2025-02-13T20:19:18.492454444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:18.494986 containerd[1469]: time="2025-02-13T20:19:18.494885387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 20:19:18.497065 containerd[1469]: time="2025-02-13T20:19:18.497010770Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:18.501006 containerd[1469]: time="2025-02-13T20:19:18.500930471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:18.502412 containerd[1469]: time="2025-02-13T20:19:18.502213513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 520.371761ms" Feb 13 20:19:18.502412 containerd[1469]: time="2025-02-13T20:19:18.502255560Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:19:18.542803 containerd[1469]: time="2025-02-13T20:19:18.542749722Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:19:19.027220 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 20:19:19.066422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552980090.mount: Deactivated successfully. 
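Each pull above records both a repo tag and a repo digest (e.g. registry.k8s.io/pause@sha256:7031c1b2...). The digest is the SHA-256 of the image manifest bytes as served by the registry, so pulling by digest pins exactly the content the tag resolved to here. An illustration of the computation only (the manifest body below is a placeholder, not real registry output):

    import hashlib

    manifest_bytes = b"{...manifest exactly as served by the registry...}"
    print("sha256:" + hashlib.sha256(manifest_bytes).hexdigest())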
Feb 13 20:19:21.501755 containerd[1469]: time="2025-02-13T20:19:21.500144337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:21.501755 containerd[1469]: time="2025-02-13T20:19:21.501694125Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 20:19:21.502467 containerd[1469]: time="2025-02-13T20:19:21.502435727Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:21.505791 containerd[1469]: time="2025-02-13T20:19:21.505744089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:21.507192 containerd[1469]: time="2025-02-13T20:19:21.507150791Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.964348216s" Feb 13 20:19:21.507312 containerd[1469]: time="2025-02-13T20:19:21.507193288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:19:23.489675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:19:23.500056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:23.664175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:23.666776 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:19:23.729854 kubelet[2088]: E0213 20:19:23.729268 2088 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:19:23.733961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:19:23.734354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:19:24.378246 systemd[1]: Started sshd@7-165.232.153.54:22-218.92.0.167:27100.service - OpenSSH per-connection server daemon (218.92.0.167:27100). Feb 13 20:19:24.430198 systemd[1]: Started sshd@8-165.232.153.54:22-218.92.0.167:27390.service - OpenSSH per-connection server daemon (218.92.0.167:27390). Feb 13 20:19:25.011293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:25.025101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:25.048023 systemd[1]: Reloading requested from client PID 2108 ('systemctl') (unit session-7.scope)... Feb 13 20:19:25.048053 systemd[1]: Reloading... Feb 13 20:19:25.209898 zram_generator::config[2153]: No configuration found. 
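The "Scheduled restart job, restart counter is at 2" line above comes from the kubelet unit's restart policy: systemd re-queues the service after each failed exit. An illustrative fragment of such a unit (the stock kubeadm packaging uses Restart=always with a 10-second delay; this is not dumped from this host):

    [Service]
    Restart=always
    RestartSec=10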
Feb 13 20:19:25.333806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:19:25.422420 systemd[1]: Reloading finished in 373 ms. Feb 13 20:19:25.479759 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:19:25.479929 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:19:25.480294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:25.490374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:25.534356 sshd[2190]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:19:25.615084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:25.627686 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:19:25.641232 sshd[2196]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:19:25.687253 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:19:25.687253 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:19:25.687253 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:19:25.688697 kubelet[2206]: I0213 20:19:25.688487 2206 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:19:26.019019 kubelet[2206]: I0213 20:19:26.018444 2206 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:19:26.019019 kubelet[2206]: I0213 20:19:26.018487 2206 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:19:26.019019 kubelet[2206]: I0213 20:19:26.018771 2206 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:19:26.041500 kubelet[2206]: I0213 20:19:26.041135 2206 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:19:26.042079 kubelet[2206]: E0213 20:19:26.042029 2206 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://165.232.153.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.063608 kubelet[2206]: I0213 20:19:26.063515 2206 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:19:26.066562 kubelet[2206]: I0213 20:19:26.066487 2206 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:19:26.066741 kubelet[2206]: I0213 20:19:26.066560 2206 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-6-670b8c47e7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:19:26.067344 kubelet[2206]: I0213 20:19:26.067287 2206 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:19:26.067344 kubelet[2206]: I0213 20:19:26.067330 2206 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:19:26.067580 kubelet[2206]: I0213 20:19:26.067555 2206 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:19:26.068687 kubelet[2206]: I0213 20:19:26.068468 2206 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:19:26.068798 kubelet[2206]: I0213 20:19:26.068722 2206 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:19:26.068798 kubelet[2206]: I0213 20:19:26.068780 2206 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:19:26.068912 kubelet[2206]: I0213 20:19:26.068802 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:19:26.074024 kubelet[2206]: W0213 20:19:26.073412 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.153.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.074024 kubelet[2206]: E0213 20:19:26.073514 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://165.232.153.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.074024 kubelet[2206]: I0213 20:19:26.073627 2206 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" 
apiVersion="v1" Feb 13 20:19:26.076288 kubelet[2206]: I0213 20:19:26.075411 2206 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:19:26.076288 kubelet[2206]: W0213 20:19:26.075490 2206 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:19:26.076288 kubelet[2206]: I0213 20:19:26.076222 2206 server.go:1264] "Started kubelet" Feb 13 20:19:26.081596 kubelet[2206]: W0213 20:19:26.081556 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.153.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-670b8c47e7&limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.081830 kubelet[2206]: E0213 20:19:26.081798 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://165.232.153.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-670b8c47e7&limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.082639 kubelet[2206]: E0213 20:19:26.082511 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.153.54:6443/api/v1/namespaces/default/events\": dial tcp 165.232.153.54:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-6-670b8c47e7.1823ddfe76973940 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-6-670b8c47e7,UID:ci-4081.3.1-6-670b8c47e7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-6-670b8c47e7,},FirstTimestamp:2025-02-13 20:19:26.076197184 +0000 UTC m=+0.443667068,LastTimestamp:2025-02-13 20:19:26.076197184 +0000 UTC m=+0.443667068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-6-670b8c47e7,}" Feb 13 20:19:26.085859 kubelet[2206]: I0213 20:19:26.083852 2206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:19:26.085859 kubelet[2206]: I0213 20:19:26.084707 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:19:26.085859 kubelet[2206]: I0213 20:19:26.084897 2206 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:19:26.085859 kubelet[2206]: I0213 20:19:26.084947 2206 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:19:26.087462 kubelet[2206]: I0213 20:19:26.087090 2206 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:19:26.098308 kubelet[2206]: I0213 20:19:26.098277 2206 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:19:26.098954 kubelet[2206]: I0213 20:19:26.098885 2206 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:19:26.099072 kubelet[2206]: I0213 20:19:26.098986 2206 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:19:26.099490 kubelet[2206]: W0213 20:19:26.099387 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.153.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused 
Feb 13 20:19:26.099490 kubelet[2206]: E0213 20:19:26.099456 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://165.232.153.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.099490 kubelet[2206]: E0213 20:19:26.099467 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.153.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-670b8c47e7?timeout=10s\": dial tcp 165.232.153.54:6443: connect: connection refused" interval="200ms" Feb 13 20:19:26.100374 kubelet[2206]: E0213 20:19:26.100157 2206 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:19:26.101186 kubelet[2206]: I0213 20:19:26.101166 2206 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:19:26.101270 kubelet[2206]: I0213 20:19:26.101194 2206 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:19:26.101322 kubelet[2206]: I0213 20:19:26.101307 2206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:19:26.120736 kubelet[2206]: I0213 20:19:26.119757 2206 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:19:26.120736 kubelet[2206]: I0213 20:19:26.119779 2206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:19:26.120736 kubelet[2206]: I0213 20:19:26.119812 2206 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:19:26.126385 kubelet[2206]: I0213 20:19:26.126340 2206 policy_none.go:49] "None policy: Start" Feb 13 20:19:26.131281 kubelet[2206]: I0213 20:19:26.131255 2206 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:19:26.131721 kubelet[2206]: I0213 20:19:26.131422 2206 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:19:26.134112 kubelet[2206]: I0213 20:19:26.134065 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:19:26.136258 kubelet[2206]: I0213 20:19:26.135602 2206 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:19:26.136258 kubelet[2206]: I0213 20:19:26.135633 2206 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:19:26.136258 kubelet[2206]: I0213 20:19:26.135667 2206 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:19:26.136258 kubelet[2206]: E0213 20:19:26.135745 2206 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:19:26.144947 kubelet[2206]: W0213 20:19:26.144889 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.153.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.144947 kubelet[2206]: E0213 20:19:26.144949 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://165.232.153.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:26.148636 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:19:26.160210 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:19:26.163677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:19:26.172097 kubelet[2206]: I0213 20:19:26.172051 2206 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:19:26.172750 kubelet[2206]: I0213 20:19:26.172265 2206 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:19:26.172750 kubelet[2206]: I0213 20:19:26.172388 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:19:26.175386 kubelet[2206]: E0213 20:19:26.175360 2206 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-6-670b8c47e7\" not found" Feb 13 20:19:26.200114 kubelet[2206]: I0213 20:19:26.199953 2206 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.200624 kubelet[2206]: E0213 20:19:26.200496 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.153.54:6443/api/v1/nodes\": dial tcp 165.232.153.54:6443: connect: connection refused" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.236835 kubelet[2206]: I0213 20:19:26.236705 2206 topology_manager.go:215] "Topology Admit Handler" podUID="fbed958039158594c157c7d0372e5d6f" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.238462 kubelet[2206]: I0213 20:19:26.238036 2206 topology_manager.go:215] "Topology Admit Handler" podUID="aea61bbb9df9969305b5579ffc02b3c3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.238961 kubelet[2206]: I0213 20:19:26.238931 2206 topology_manager.go:215] "Topology Admit Handler" podUID="f44d5b855e98264e6436eada9e03c576" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.247446 systemd[1]: Created slice kubepods-burstable-podfbed958039158594c157c7d0372e5d6f.slice - libcontainer container kubepods-burstable-podfbed958039158594c157c7d0372e5d6f.slice. 
Feb 13 20:19:26.273199 systemd[1]: Created slice kubepods-burstable-podaea61bbb9df9969305b5579ffc02b3c3.slice - libcontainer container kubepods-burstable-podaea61bbb9df9969305b5579ffc02b3c3.slice. Feb 13 20:19:26.282077 systemd[1]: Created slice kubepods-burstable-podf44d5b855e98264e6436eada9e03c576.slice - libcontainer container kubepods-burstable-podf44d5b855e98264e6436eada9e03c576.slice. Feb 13 20:19:26.300968 kubelet[2206]: E0213 20:19:26.300911 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.153.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-670b8c47e7?timeout=10s\": dial tcp 165.232.153.54:6443: connect: connection refused" interval="400ms" Feb 13 20:19:26.301098 kubelet[2206]: I0213 20:19:26.301065 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f44d5b855e98264e6436eada9e03c576-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-6-670b8c47e7\" (UID: \"f44d5b855e98264e6436eada9e03c576\") " pod="kube-system/kube-scheduler-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301153 kubelet[2206]: I0213 20:19:26.301112 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbed958039158594c157c7d0372e5d6f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" (UID: \"fbed958039158594c157c7d0372e5d6f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301153 kubelet[2206]: I0213 20:19:26.301139 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301215 kubelet[2206]: I0213 20:19:26.301156 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301215 kubelet[2206]: I0213 20:19:26.301174 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301215 kubelet[2206]: I0213 20:19:26.301188 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301215 kubelet[2206]: I0213 20:19:26.301203 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fbed958039158594c157c7d0372e5d6f-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" (UID: \"fbed958039158594c157c7d0372e5d6f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301328 kubelet[2206]: I0213 20:19:26.301219 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbed958039158594c157c7d0372e5d6f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" (UID: \"fbed958039158594c157c7d0372e5d6f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.301328 kubelet[2206]: I0213 20:19:26.301243 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.402800 kubelet[2206]: I0213 20:19:26.402499 2206 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.403085 kubelet[2206]: E0213 20:19:26.403061 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.153.54:6443/api/v1/nodes\": dial tcp 165.232.153.54:6443: connect: connection refused" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.568463 kubelet[2206]: E0213 20:19:26.568290 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:26.569306 containerd[1469]: time="2025-02-13T20:19:26.569157500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-6-670b8c47e7,Uid:fbed958039158594c157c7d0372e5d6f,Namespace:kube-system,Attempt:0,}" Feb 13 20:19:26.571227 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Feb 13 20:19:26.580124 kubelet[2206]: E0213 20:19:26.579471 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:26.585454 containerd[1469]: time="2025-02-13T20:19:26.585403940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-6-670b8c47e7,Uid:aea61bbb9df9969305b5579ffc02b3c3,Namespace:kube-system,Attempt:0,}" Feb 13 20:19:26.587152 kubelet[2206]: E0213 20:19:26.587119 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:26.587644 containerd[1469]: time="2025-02-13T20:19:26.587601723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-6-670b8c47e7,Uid:f44d5b855e98264e6436eada9e03c576,Namespace:kube-system,Attempt:0,}" Feb 13 20:19:26.702017 kubelet[2206]: E0213 20:19:26.701969 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.153.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-670b8c47e7?timeout=10s\": dial tcp 165.232.153.54:6443: connect: connection refused" interval="800ms" Feb 13 20:19:26.804789 kubelet[2206]: I0213 20:19:26.804740 2206 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:26.805463 kubelet[2206]: E0213 20:19:26.805409 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.153.54:6443/api/v1/nodes\": dial tcp 165.232.153.54:6443: connect: connection refused" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:27.105341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171560721.mount: Deactivated successfully. 
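[Editor's note] "Failed to ensure lease exists, will retry" is the kubelet heartbeat: node liveness is recorded as a Lease object in the kube-node-lease namespace, and the retry interval doubles (200ms, 400ms, 800ms here, 1.6s below) while the API server stays unreachable. Once the control plane is up, the lease can be inspected, assuming a working kubeconfig:

    kubectl -n kube-node-lease get lease ci-4081.3.1-6-670b8c47e7 -o yaml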
Feb 13 20:19:27.116874 containerd[1469]: time="2025-02-13T20:19:27.114636635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:19:27.116874 containerd[1469]: time="2025-02-13T20:19:27.115868369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:19:27.117141 containerd[1469]: time="2025-02-13T20:19:27.117095437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:19:27.117390 containerd[1469]: time="2025-02-13T20:19:27.117341282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:19:27.119104 containerd[1469]: time="2025-02-13T20:19:27.119032632Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:19:27.120692 containerd[1469]: time="2025-02-13T20:19:27.120190616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:19:27.120692 containerd[1469]: time="2025-02-13T20:19:27.120463215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:19:27.123417 containerd[1469]: time="2025-02-13T20:19:27.123349373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:19:27.126472 containerd[1469]: time="2025-02-13T20:19:27.125873448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 556.625378ms" Feb 13 20:19:27.127284 containerd[1469]: time="2025-02-13T20:19:27.127232633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.50947ms" Feb 13 20:19:27.129720 containerd[1469]: time="2025-02-13T20:19:27.129673679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.966741ms" Feb 13 20:19:27.287485 kubelet[2206]: W0213 20:19:27.287367 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.153.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.287485 
kubelet[2206]: E0213 20:19:27.287452 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://165.232.153.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.318341 containerd[1469]: time="2025-02-13T20:19:27.317937036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:27.318341 containerd[1469]: time="2025-02-13T20:19:27.318005330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:27.318341 containerd[1469]: time="2025-02-13T20:19:27.318024190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:27.318341 containerd[1469]: time="2025-02-13T20:19:27.318294074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:27.335078 containerd[1469]: time="2025-02-13T20:19:27.329931484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:27.335078 containerd[1469]: time="2025-02-13T20:19:27.330664807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:27.335078 containerd[1469]: time="2025-02-13T20:19:27.330809391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:27.335078 containerd[1469]: time="2025-02-13T20:19:27.331362415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:27.339730 containerd[1469]: time="2025-02-13T20:19:27.338597182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:27.339730 containerd[1469]: time="2025-02-13T20:19:27.338677408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:27.339730 containerd[1469]: time="2025-02-13T20:19:27.338693880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:27.339730 containerd[1469]: time="2025-02-13T20:19:27.338843952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:27.368346 kubelet[2206]: W0213 20:19:27.367498 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.153.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.368920 kubelet[2206]: E0213 20:19:27.368887 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://165.232.153.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.370131 systemd[1]: Started cri-containerd-56a45f3b91bcf425129b2df96cbc5fcd250981f2d5919d04301ee201786ad029.scope - libcontainer container 56a45f3b91bcf425129b2df96cbc5fcd250981f2d5919d04301ee201786ad029. Feb 13 20:19:27.379405 systemd[1]: Started cri-containerd-d47c510e0f3afc579b693f8656889c6da70cc3c888d3036a246ef4ecbe7dc089.scope - libcontainer container d47c510e0f3afc579b693f8656889c6da70cc3c888d3036a246ef4ecbe7dc089. Feb 13 20:19:27.391367 systemd[1]: Started cri-containerd-77550f625d5f3431936304fb16d9baf8e6303d810f2574794cc3851e75a76165.scope - libcontainer container 77550f625d5f3431936304fb16d9baf8e6303d810f2574794cc3851e75a76165. Feb 13 20:19:27.479315 containerd[1469]: time="2025-02-13T20:19:27.479156687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-6-670b8c47e7,Uid:aea61bbb9df9969305b5579ffc02b3c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"56a45f3b91bcf425129b2df96cbc5fcd250981f2d5919d04301ee201786ad029\"" Feb 13 20:19:27.486172 containerd[1469]: time="2025-02-13T20:19:27.485992424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-6-670b8c47e7,Uid:fbed958039158594c157c7d0372e5d6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d47c510e0f3afc579b693f8656889c6da70cc3c888d3036a246ef4ecbe7dc089\"" Feb 13 20:19:27.487415 kubelet[2206]: E0213 20:19:27.487358 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:27.489872 kubelet[2206]: E0213 20:19:27.489374 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:27.493554 containerd[1469]: time="2025-02-13T20:19:27.493500090Z" level=info msg="CreateContainer within sandbox \"56a45f3b91bcf425129b2df96cbc5fcd250981f2d5919d04301ee201786ad029\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:19:27.504171 kubelet[2206]: E0213 20:19:27.504115 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.153.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-670b8c47e7?timeout=10s\": dial tcp 165.232.153.54:6443: connect: connection refused" interval="1.6s" Feb 13 20:19:27.510712 containerd[1469]: time="2025-02-13T20:19:27.510628396Z" level=info msg="CreateContainer within sandbox \"d47c510e0f3afc579b693f8656889c6da70cc3c888d3036a246ef4ecbe7dc089\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:19:27.512241 containerd[1469]: time="2025-02-13T20:19:27.512199624Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-6-670b8c47e7,Uid:f44d5b855e98264e6436eada9e03c576,Namespace:kube-system,Attempt:0,} returns sandbox id \"77550f625d5f3431936304fb16d9baf8e6303d810f2574794cc3851e75a76165\"" Feb 13 20:19:27.514635 kubelet[2206]: E0213 20:19:27.514337 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:27.517196 containerd[1469]: time="2025-02-13T20:19:27.517149596Z" level=info msg="CreateContainer within sandbox \"77550f625d5f3431936304fb16d9baf8e6303d810f2574794cc3851e75a76165\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:19:27.540336 containerd[1469]: time="2025-02-13T20:19:27.540256047Z" level=info msg="CreateContainer within sandbox \"56a45f3b91bcf425129b2df96cbc5fcd250981f2d5919d04301ee201786ad029\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d9d1638c01c3ec9d290cefa2cda94994db67a6215204a0248b2c8deecf5e7f4\"" Feb 13 20:19:27.541312 containerd[1469]: time="2025-02-13T20:19:27.541253724Z" level=info msg="StartContainer for \"9d9d1638c01c3ec9d290cefa2cda94994db67a6215204a0248b2c8deecf5e7f4\"" Feb 13 20:19:27.548712 sshd[2097]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:19:27.565758 containerd[1469]: time="2025-02-13T20:19:27.565691812Z" level=info msg="CreateContainer within sandbox \"77550f625d5f3431936304fb16d9baf8e6303d810f2574794cc3851e75a76165\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"797d7d50b29745bc669915ce3a5b9a9a72247f3f6185a6b0c2702999848177c2\"" Feb 13 20:19:27.567138 containerd[1469]: time="2025-02-13T20:19:27.567102997Z" level=info msg="StartContainer for \"797d7d50b29745bc669915ce3a5b9a9a72247f3f6185a6b0c2702999848177c2\"" Feb 13 20:19:27.574735 containerd[1469]: time="2025-02-13T20:19:27.574259774Z" level=info msg="CreateContainer within sandbox \"d47c510e0f3afc579b693f8656889c6da70cc3c888d3036a246ef4ecbe7dc089\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"75b27499c4e1b8ccd9b076f866986f20f4d0237633b46fe696d426cb08b77299\"" Feb 13 20:19:27.576889 containerd[1469]: time="2025-02-13T20:19:27.575811790Z" level=info msg="StartContainer for \"75b27499c4e1b8ccd9b076f866986f20f4d0237633b46fe696d426cb08b77299\"" Feb 13 20:19:27.578117 systemd[1]: Started cri-containerd-9d9d1638c01c3ec9d290cefa2cda94994db67a6215204a0248b2c8deecf5e7f4.scope - libcontainer container 9d9d1638c01c3ec9d290cefa2cda94994db67a6215204a0248b2c8deecf5e7f4. 
Feb 13 20:19:27.587887 kubelet[2206]: W0213 20:19:27.587073 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.153.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.588286 kubelet[2206]: E0213 20:19:27.588255 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://165.232.153.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.589902 kubelet[2206]: W0213 20:19:27.588473 2206 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.153.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-670b8c47e7&limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.589902 kubelet[2206]: E0213 20:19:27.588611 2206 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://165.232.153.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-670b8c47e7&limit=500&resourceVersion=0": dial tcp 165.232.153.54:6443: connect: connection refused Feb 13 20:19:27.611092 kubelet[2206]: I0213 20:19:27.611059 2206 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:27.612113 kubelet[2206]: E0213 20:19:27.611968 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.153.54:6443/api/v1/nodes\": dial tcp 165.232.153.54:6443: connect: connection refused" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:27.629080 systemd[1]: Started cri-containerd-797d7d50b29745bc669915ce3a5b9a9a72247f3f6185a6b0c2702999848177c2.scope - libcontainer container 797d7d50b29745bc669915ce3a5b9a9a72247f3f6185a6b0c2702999848177c2. Feb 13 20:19:27.646575 systemd[1]: Started cri-containerd-75b27499c4e1b8ccd9b076f866986f20f4d0237633b46fe696d426cb08b77299.scope - libcontainer container 75b27499c4e1b8ccd9b076f866986f20f4d0237633b46fe696d426cb08b77299. 
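[Editor's note] Each container the kubelet starts runs in a transient cri-containerd-<container-id>.scope unit, so the ids in these systemd lines are the same ids the CRI reports. A quick cross-check sketch, assuming crictl is installed and pointed at containerd's socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a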
Feb 13 20:19:27.655344 sshd[2100]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:19:27.683886 containerd[1469]: time="2025-02-13T20:19:27.682992400Z" level=info msg="StartContainer for \"9d9d1638c01c3ec9d290cefa2cda94994db67a6215204a0248b2c8deecf5e7f4\" returns successfully" Feb 13 20:19:27.723045 containerd[1469]: time="2025-02-13T20:19:27.723003733Z" level=info msg="StartContainer for \"797d7d50b29745bc669915ce3a5b9a9a72247f3f6185a6b0c2702999848177c2\" returns successfully" Feb 13 20:19:27.745373 containerd[1469]: time="2025-02-13T20:19:27.745306704Z" level=info msg="StartContainer for \"75b27499c4e1b8ccd9b076f866986f20f4d0237633b46fe696d426cb08b77299\" returns successfully" Feb 13 20:19:27.860084 sshd[2451]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:19:27.985060 sshd[2477]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:19:28.152435 kubelet[2206]: E0213 20:19:28.152299 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:28.154930 kubelet[2206]: E0213 20:19:28.154646 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:28.159678 kubelet[2206]: E0213 20:19:28.159645 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:29.159536 kubelet[2206]: E0213 20:19:29.159490 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:29.213377 kubelet[2206]: I0213 20:19:29.213332 2206 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:29.814138 sshd[2097]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:19:29.868761 kubelet[2206]: E0213 20:19:29.868717 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-6-670b8c47e7\" not found" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:29.939050 sshd[2100]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:19:29.974168 kubelet[2206]: I0213 20:19:29.973894 2206 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:30.005440 kubelet[2206]: E0213 20:19:30.005385 2206 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-6-670b8c47e7\" not found" Feb 13 20:19:30.105995 kubelet[2206]: E0213 20:19:30.105846 2206 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-6-670b8c47e7\" not found" Feb 13 20:19:30.125292 sshd[2482]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:19:30.206211 kubelet[2206]: E0213 20:19:30.206151 2206 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-6-670b8c47e7\" not found" Feb 13 20:19:30.267130 sshd[2483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 
user=root Feb 13 20:19:30.306633 kubelet[2206]: E0213 20:19:30.306555 2206 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.1-6-670b8c47e7\" not found" Feb 13 20:19:31.076315 kubelet[2206]: I0213 20:19:31.076243 2206 apiserver.go:52] "Watching apiserver" Feb 13 20:19:31.099164 kubelet[2206]: I0213 20:19:31.099113 2206 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:19:31.965606 sshd[2100]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:19:32.019204 sshd[2097]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:19:32.129209 sshd[2100]: Received disconnect from 218.92.0.167 port 27390:11: [preauth] Feb 13 20:19:32.129209 sshd[2100]: Disconnected from authenticating user root 218.92.0.167 port 27390 [preauth] Feb 13 20:19:32.131473 systemd[1]: sshd@8-165.232.153.54:22-218.92.0.167:27390.service: Deactivated successfully. Feb 13 20:19:32.175001 sshd[2097]: Received disconnect from 218.92.0.167 port 27100:11: [preauth] Feb 13 20:19:32.175001 sshd[2097]: Disconnected from authenticating user root 218.92.0.167 port 27100 [preauth] Feb 13 20:19:32.175916 systemd[1]: sshd@7-165.232.153.54:22-218.92.0.167:27100.service: Deactivated successfully. Feb 13 20:19:32.580555 systemd[1]: Reloading requested from client PID 2489 ('systemctl') (unit session-7.scope)... Feb 13 20:19:32.580787 systemd[1]: Reloading... Feb 13 20:19:32.696865 zram_generator::config[2531]: No configuration found. Feb 13 20:19:32.840903 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:19:32.964587 systemd[1]: Reloading finished in 383 ms. Feb 13 20:19:33.011431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:33.011716 kubelet[2206]: E0213 20:19:33.011582 2206 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.1-6-670b8c47e7.1823ddfe76973940 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-6-670b8c47e7,UID:ci-4081.3.1-6-670b8c47e7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-6-670b8c47e7,},FirstTimestamp:2025-02-13 20:19:26.076197184 +0000 UTC m=+0.443667068,LastTimestamp:2025-02-13 20:19:26.076197184 +0000 UTC m=+0.443667068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-6-670b8c47e7,}" Feb 13 20:19:33.021601 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:19:33.021930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:33.028345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:19:33.192751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:19:33.207454 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:19:33.315223 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
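[Editor's note] Both daemon-reload passes print the same legacy-path warning for docker.socket. systemd patches the path in memory each time; the permanent fix is the one-line edit the warning itself suggests:

    # /usr/lib/systemd/system/docker.socket, line 6
    # before:
    ListenStream=/var/run/docker.sock
    # after:
    ListenStream=/run/docker.sock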
Feb 13 20:19:33.315967 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:19:33.315967 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:19:33.318875 kubelet[2579]: I0213 20:19:33.318602 2579 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:19:33.329652 kubelet[2579]: I0213 20:19:33.329607 2579 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:19:33.329652 kubelet[2579]: I0213 20:19:33.329647 2579 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:19:33.330057 kubelet[2579]: I0213 20:19:33.330034 2579 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:19:33.337857 kubelet[2579]: I0213 20:19:33.337503 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:19:33.349175 kubelet[2579]: I0213 20:19:33.349132 2579 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:19:33.357810 kubelet[2579]: I0213 20:19:33.357775 2579 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:19:33.358708 kubelet[2579]: I0213 20:19:33.358250 2579 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:19:33.358708 kubelet[2579]: I0213 20:19:33.358289 2579 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-6-670b8c47e7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:19:33.358708 kubelet[2579]: I0213 20:19:33.358482 2579 topology_manager.go:138] "Creating topology manager with none 
policy" Feb 13 20:19:33.358708 kubelet[2579]: I0213 20:19:33.358494 2579 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:19:33.359017 kubelet[2579]: I0213 20:19:33.358543 2579 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:19:33.360868 kubelet[2579]: I0213 20:19:33.359850 2579 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:19:33.360868 kubelet[2579]: I0213 20:19:33.359876 2579 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:19:33.360868 kubelet[2579]: I0213 20:19:33.359905 2579 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:19:33.360868 kubelet[2579]: I0213 20:19:33.359925 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:19:33.364033 kubelet[2579]: I0213 20:19:33.363990 2579 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:19:33.365928 kubelet[2579]: I0213 20:19:33.365769 2579 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:19:33.366404 kubelet[2579]: I0213 20:19:33.366384 2579 server.go:1264] "Started kubelet" Feb 13 20:19:33.369251 kubelet[2579]: I0213 20:19:33.368781 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:19:33.386268 kubelet[2579]: I0213 20:19:33.386207 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:19:33.387378 kubelet[2579]: I0213 20:19:33.387342 2579 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:19:33.388052 kubelet[2579]: I0213 20:19:33.387919 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:19:33.388218 kubelet[2579]: I0213 20:19:33.388202 2579 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:19:33.393621 kubelet[2579]: I0213 20:19:33.393583 2579 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:19:33.395358 kubelet[2579]: I0213 20:19:33.395325 2579 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:19:33.396092 kubelet[2579]: I0213 20:19:33.395485 2579 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:19:33.407367 kubelet[2579]: I0213 20:19:33.407321 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:19:33.409872 kubelet[2579]: I0213 20:19:33.409810 2579 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:19:33.410240 kubelet[2579]: I0213 20:19:33.409968 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:19:33.413542 kubelet[2579]: I0213 20:19:33.413277 2579 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:19:33.413542 kubelet[2579]: I0213 20:19:33.413350 2579 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:19:33.413542 kubelet[2579]: I0213 20:19:33.413407 2579 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:19:33.413542 kubelet[2579]: E0213 20:19:33.413496 2579 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:19:33.420792 kubelet[2579]: I0213 20:19:33.420719 2579 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:19:33.422339 kubelet[2579]: E0213 20:19:33.421971 2579 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:19:33.483529 kubelet[2579]: I0213 20:19:33.483425 2579 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:19:33.483694 kubelet[2579]: I0213 20:19:33.483680 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:19:33.484023 kubelet[2579]: I0213 20:19:33.484000 2579 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:19:33.484667 kubelet[2579]: I0213 20:19:33.484465 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:19:33.484947 kubelet[2579]: I0213 20:19:33.484801 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:19:33.484947 kubelet[2579]: I0213 20:19:33.484890 2579 policy_none.go:49] "None policy: Start" Feb 13 20:19:33.486890 kubelet[2579]: I0213 20:19:33.486098 2579 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:19:33.486890 kubelet[2579]: I0213 20:19:33.486135 2579 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:19:33.486890 kubelet[2579]: I0213 20:19:33.486366 2579 state_mem.go:75] "Updated machine memory state" Feb 13 20:19:33.492883 kubelet[2579]: I0213 20:19:33.492849 2579 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:19:33.493158 kubelet[2579]: I0213 20:19:33.493112 2579 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:19:33.493299 kubelet[2579]: I0213 20:19:33.493283 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:19:33.497589 kubelet[2579]: I0213 20:19:33.497555 2579 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.515605 kubelet[2579]: I0213 20:19:33.514650 2579 topology_manager.go:215] "Topology Admit Handler" podUID="fbed958039158594c157c7d0372e5d6f" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.515605 kubelet[2579]: I0213 20:19:33.514879 2579 topology_manager.go:215] "Topology Admit Handler" podUID="aea61bbb9df9969305b5579ffc02b3c3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.515605 kubelet[2579]: I0213 20:19:33.514975 2579 topology_manager.go:215] "Topology Admit Handler" podUID="f44d5b855e98264e6436eada9e03c576" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.543024 kubelet[2579]: I0213 20:19:33.542962 2579 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.544559 kubelet[2579]: I0213 20:19:33.544517 2579 kubelet_node_status.go:76] "Successfully registered 
node" node="ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.547285 kubelet[2579]: W0213 20:19:33.547229 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:19:33.562899 kubelet[2579]: W0213 20:19:33.562741 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:19:33.563130 kubelet[2579]: W0213 20:19:33.563072 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:19:33.697598 kubelet[2579]: I0213 20:19:33.697163 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697598 kubelet[2579]: I0213 20:19:33.697238 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbed958039158594c157c7d0372e5d6f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" (UID: \"fbed958039158594c157c7d0372e5d6f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697598 kubelet[2579]: I0213 20:19:33.697271 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbed958039158594c157c7d0372e5d6f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" (UID: \"fbed958039158594c157c7d0372e5d6f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697598 kubelet[2579]: I0213 20:19:33.697299 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697598 kubelet[2579]: I0213 20:19:33.697328 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697992 kubelet[2579]: I0213 20:19:33.697355 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697992 kubelet[2579]: I0213 20:19:33.697379 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f44d5b855e98264e6436eada9e03c576-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-6-670b8c47e7\" (UID: \"f44d5b855e98264e6436eada9e03c576\") " pod="kube-system/kube-scheduler-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697992 kubelet[2579]: I0213 20:19:33.697400 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbed958039158594c157c7d0372e5d6f-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" (UID: \"fbed958039158594c157c7d0372e5d6f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.697992 kubelet[2579]: I0213 20:19:33.697423 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aea61bbb9df9969305b5579ffc02b3c3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-670b8c47e7\" (UID: \"aea61bbb9df9969305b5579ffc02b3c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:33.849881 kubelet[2579]: E0213 20:19:33.849413 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:33.863858 kubelet[2579]: E0213 20:19:33.863498 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:33.864498 kubelet[2579]: E0213 20:19:33.864409 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:34.361986 kubelet[2579]: I0213 20:19:34.361625 2579 apiserver.go:52] "Watching apiserver" Feb 13 20:19:34.397174 kubelet[2579]: I0213 20:19:34.397099 2579 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:19:34.447644 kubelet[2579]: E0213 20:19:34.447599 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:34.453966 kubelet[2579]: E0213 20:19:34.451612 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:34.474848 kubelet[2579]: W0213 20:19:34.472613 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:19:34.474848 kubelet[2579]: E0213 20:19:34.472723 2579 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-6-670b8c47e7\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" Feb 13 20:19:34.477158 kubelet[2579]: E0213 20:19:34.476986 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:34.512966 kubelet[2579]: I0213 20:19:34.512351 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-6-670b8c47e7" podStartSLOduration=1.512304813 podStartE2EDuration="1.512304813s" 
podCreationTimestamp="2025-02-13 20:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:19:34.509012096 +0000 UTC m=+1.292090456" watchObservedRunningTime="2025-02-13 20:19:34.512304813 +0000 UTC m=+1.295383164" Feb 13 20:19:34.559527 kubelet[2579]: I0213 20:19:34.559332 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-6-670b8c47e7" podStartSLOduration=1.5593132459999999 podStartE2EDuration="1.559313246s" podCreationTimestamp="2025-02-13 20:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:19:34.533277793 +0000 UTC m=+1.316356121" watchObservedRunningTime="2025-02-13 20:19:34.559313246 +0000 UTC m=+1.342391593" Feb 13 20:19:34.559758 kubelet[2579]: I0213 20:19:34.559692 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-6-670b8c47e7" podStartSLOduration=1.559662741 podStartE2EDuration="1.559662741s" podCreationTimestamp="2025-02-13 20:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:19:34.559029363 +0000 UTC m=+1.342107710" watchObservedRunningTime="2025-02-13 20:19:34.559662741 +0000 UTC m=+1.342741090" Feb 13 20:19:35.450903 kubelet[2579]: E0213 20:19:35.450291 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:35.451379 kubelet[2579]: E0213 20:19:35.450919 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:36.447808 kubelet[2579]: E0213 20:19:36.447701 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:38.012995 systemd-resolved[1321]: Clock change detected. Flushing caches. Feb 13 20:19:38.013854 systemd-timesyncd[1343]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org). Feb 13 20:19:38.013938 systemd-timesyncd[1343]: Initial clock synchronization to Thu 2025-02-13 20:19:38.012874 UTC. Feb 13 20:19:38.553433 sudo[1645]: pam_unix(sudo:session): session closed for user root Feb 13 20:19:38.558809 sshd[1642]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:38.567778 systemd[1]: sshd@6-165.232.153.54:22-147.75.109.163:47774.service: Deactivated successfully. Feb 13 20:19:38.571958 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:19:38.572581 systemd[1]: session-7.scope: Consumed 6.282s CPU time, 188.1M memory peak, 0B memory swap peak. Feb 13 20:19:38.573733 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:19:38.576085 systemd-logind[1441]: Removed session 7. 
Feb 13 20:19:38.923375 kubelet[2579]: E0213 20:19:38.922493 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:38.963419 kubelet[2579]: E0213 20:19:38.963378 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:45.020397 update_engine[1442]: I20250213 20:19:45.019922 1442 update_attempter.cc:509] Updating boot flags... Feb 13 20:19:45.056482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2663) Feb 13 20:19:45.120530 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2661) Feb 13 20:19:45.177408 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2661) Feb 13 20:19:45.598228 kubelet[2579]: E0213 20:19:45.598188 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:46.962655 kubelet[2579]: E0213 20:19:46.962609 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:47.391366 kubelet[2579]: I0213 20:19:47.391210 2579 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:19:47.393119 containerd[1469]: time="2025-02-13T20:19:47.393064162Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:19:47.394363 kubelet[2579]: I0213 20:19:47.393888 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:19:48.012295 kubelet[2579]: I0213 20:19:48.012240 2579 topology_manager.go:215] "Topology Admit Handler" podUID="82242730-2a22-4cf9-9701-f323e61859f2" podNamespace="kube-system" podName="kube-proxy-l48p4" Feb 13 20:19:48.033524 systemd[1]: Created slice kubepods-besteffort-pod82242730_2a22_4cf9_9701_f323e61859f2.slice - libcontainer container kubepods-besteffort-pod82242730_2a22_4cf9_9701_f323e61859f2.slice. 
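[Note] When the kube-proxy pod is admitted, systemd creates `kubepods-besteffort-pod82242730_2a22_4cf9_9701_f323e61859f2.slice`: the cgroup slice for a BestEffort QoS pod, named by prefixing the QoS tier and embedding the pod UID with its dashes escaped to underscores (systemd reserves `-` as a hierarchy separator in unit names). A sketch of that derivation, using a helper invented here rather than kubelet's cgroup-manager code:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the unit name seen in the log. qos is
// "besteffort" or "burstable"; Guaranteed pods omit the tier entirely
// and are not handled by this illustrative sketch.
func podSliceName(qos, podUID string) string {
	// systemd treats "-" as a path separator inside unit names, so the
	// UID's dashes are escaped to underscores.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("besteffort", "82242730-2a22-4cf9-9701-f323e61859f2"))
	// kubepods-besteffort-pod82242730_2a22_4cf9_9701_f323e61859f2.slice
}
```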
Feb 13 20:19:48.102590 kubelet[2579]: I0213 20:19:48.102164 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82242730-2a22-4cf9-9701-f323e61859f2-xtables-lock\") pod \"kube-proxy-l48p4\" (UID: \"82242730-2a22-4cf9-9701-f323e61859f2\") " pod="kube-system/kube-proxy-l48p4" Feb 13 20:19:48.102590 kubelet[2579]: I0213 20:19:48.102234 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82242730-2a22-4cf9-9701-f323e61859f2-lib-modules\") pod \"kube-proxy-l48p4\" (UID: \"82242730-2a22-4cf9-9701-f323e61859f2\") " pod="kube-system/kube-proxy-l48p4" Feb 13 20:19:48.102590 kubelet[2579]: I0213 20:19:48.102294 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82242730-2a22-4cf9-9701-f323e61859f2-kube-proxy\") pod \"kube-proxy-l48p4\" (UID: \"82242730-2a22-4cf9-9701-f323e61859f2\") " pod="kube-system/kube-proxy-l48p4" Feb 13 20:19:48.102590 kubelet[2579]: I0213 20:19:48.102356 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbpqq\" (UniqueName: \"kubernetes.io/projected/82242730-2a22-4cf9-9701-f323e61859f2-kube-api-access-xbpqq\") pod \"kube-proxy-l48p4\" (UID: \"82242730-2a22-4cf9-9701-f323e61859f2\") " pod="kube-system/kube-proxy-l48p4" Feb 13 20:19:48.213959 kubelet[2579]: E0213 20:19:48.213906 2579 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:19:48.213959 kubelet[2579]: E0213 20:19:48.213972 2579 projected.go:200] Error preparing data for projected volume kube-api-access-xbpqq for pod kube-system/kube-proxy-l48p4: configmap "kube-root-ca.crt" not found Feb 13 20:19:48.214273 kubelet[2579]: E0213 20:19:48.214080 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82242730-2a22-4cf9-9701-f323e61859f2-kube-api-access-xbpqq podName:82242730-2a22-4cf9-9701-f323e61859f2 nodeName:}" failed. No retries permitted until 2025-02-13 20:19:48.71405019 +0000 UTC m=+14.990643078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xbpqq" (UniqueName: "kubernetes.io/projected/82242730-2a22-4cf9-9701-f323e61859f2-kube-api-access-xbpqq") pod "kube-proxy-l48p4" (UID: "82242730-2a22-4cf9-9701-f323e61859f2") : configmap "kube-root-ca.crt" not found Feb 13 20:19:48.484892 kubelet[2579]: I0213 20:19:48.484590 2579 topology_manager.go:215] "Topology Admit Handler" podUID="ed430593-1323-4694-bcf3-3b9e3cd90a07" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8svjz" Feb 13 20:19:48.496226 systemd[1]: Created slice kubepods-besteffort-poded430593_1323_4694_bcf3_3b9e3cd90a07.slice - libcontainer container kubepods-besteffort-poded430593_1323_4694_bcf3_3b9e3cd90a07.slice. 
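[Note] The MountVolume.SetUp failure for `kube-api-access-xbpqq` is benign ordering noise: the pod's projected service-account volume needs the `kube-root-ca.crt` ConfigMap, which the controller-manager has not yet published into the new namespace, so the operation executor schedules a retry (`durationBeforeRetry 500ms`). A hedged sketch of that retry shape; the 500ms base is from the log, while the doubling and the cap are assumptions about kubelet's nested pending operations, not verified constants:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a failing volume setup with exponential backoff,
// starting at the 500ms seen in the log and doubling up to a cap (the 2m
// value here is an assumption for illustration).
func mountWithBackoff(setUp func() error) error {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	for {
		err := setUp()
		if err == nil {
			return nil
		}
		fmt.Printf("setup failed: %v; no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	attempts := 0
	err := mountWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil // the ConfigMap eventually appears in the namespace
	})
	fmt.Println(err, attempts) // <nil> 3
}
```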
Feb 13 20:19:48.505033 kubelet[2579]: I0213 20:19:48.504887 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ed430593-1323-4694-bcf3-3b9e3cd90a07-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8svjz\" (UID: \"ed430593-1323-4694-bcf3-3b9e3cd90a07\") " pod="tigera-operator/tigera-operator-7bc55997bb-8svjz" Feb 13 20:19:48.505033 kubelet[2579]: I0213 20:19:48.504944 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-685wb\" (UniqueName: \"kubernetes.io/projected/ed430593-1323-4694-bcf3-3b9e3cd90a07-kube-api-access-685wb\") pod \"tigera-operator-7bc55997bb-8svjz\" (UID: \"ed430593-1323-4694-bcf3-3b9e3cd90a07\") " pod="tigera-operator/tigera-operator-7bc55997bb-8svjz" Feb 13 20:19:48.801900 containerd[1469]: time="2025-02-13T20:19:48.801471548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8svjz,Uid:ed430593-1323-4694-bcf3-3b9e3cd90a07,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:19:48.840336 containerd[1469]: time="2025-02-13T20:19:48.840172484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:48.840336 containerd[1469]: time="2025-02-13T20:19:48.840233828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:48.840336 containerd[1469]: time="2025-02-13T20:19:48.840248967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:48.840665 containerd[1469]: time="2025-02-13T20:19:48.840414096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:48.866569 systemd[1]: Started cri-containerd-964e28abbd9ff5100f00ed29e8af7224b5e13b261ba9acee2c215b2814bd140d.scope - libcontainer container 964e28abbd9ff5100f00ed29e8af7224b5e13b261ba9acee2c215b2814bd140d. Feb 13 20:19:48.912432 containerd[1469]: time="2025-02-13T20:19:48.912234303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8svjz,Uid:ed430593-1323-4694-bcf3-3b9e3cd90a07,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"964e28abbd9ff5100f00ed29e8af7224b5e13b261ba9acee2c215b2814bd140d\"" Feb 13 20:19:48.916124 containerd[1469]: time="2025-02-13T20:19:48.916089006Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:19:48.948671 kubelet[2579]: E0213 20:19:48.948622 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:48.949352 containerd[1469]: time="2025-02-13T20:19:48.949215936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l48p4,Uid:82242730-2a22-4cf9-9701-f323e61859f2,Namespace:kube-system,Attempt:0,}" Feb 13 20:19:48.976278 containerd[1469]: time="2025-02-13T20:19:48.976158594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:48.976278 containerd[1469]: time="2025-02-13T20:19:48.976222708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:48.976278 containerd[1469]: time="2025-02-13T20:19:48.976253693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:48.977146 containerd[1469]: time="2025-02-13T20:19:48.976851833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:49.000623 systemd[1]: Started cri-containerd-b207d3d5f3b7647bd1addfdfc655f61ae4bcac395af58044ca4f1ff6667f454f.scope - libcontainer container b207d3d5f3b7647bd1addfdfc655f61ae4bcac395af58044ca4f1ff6667f454f. Feb 13 20:19:49.032840 containerd[1469]: time="2025-02-13T20:19:49.032779595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l48p4,Uid:82242730-2a22-4cf9-9701-f323e61859f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b207d3d5f3b7647bd1addfdfc655f61ae4bcac395af58044ca4f1ff6667f454f\"" Feb 13 20:19:49.033674 kubelet[2579]: E0213 20:19:49.033646 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:49.038568 containerd[1469]: time="2025-02-13T20:19:49.038503689Z" level=info msg="CreateContainer within sandbox \"b207d3d5f3b7647bd1addfdfc655f61ae4bcac395af58044ca4f1ff6667f454f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:19:49.055645 containerd[1469]: time="2025-02-13T20:19:49.055490704Z" level=info msg="CreateContainer within sandbox \"b207d3d5f3b7647bd1addfdfc655f61ae4bcac395af58044ca4f1ff6667f454f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ee1f077ab51ee836be040c1fe3b901e1dea704d741078cc13fc99565cfb16523\"" Feb 13 20:19:49.058073 containerd[1469]: time="2025-02-13T20:19:49.057477098Z" level=info msg="StartContainer for \"ee1f077ab51ee836be040c1fe3b901e1dea704d741078cc13fc99565cfb16523\"" Feb 13 20:19:49.094600 systemd[1]: Started cri-containerd-ee1f077ab51ee836be040c1fe3b901e1dea704d741078cc13fc99565cfb16523.scope - libcontainer container ee1f077ab51ee836be040c1fe3b901e1dea704d741078cc13fc99565cfb16523. Feb 13 20:19:49.131143 containerd[1469]: time="2025-02-13T20:19:49.130899713Z" level=info msg="StartContainer for \"ee1f077ab51ee836be040c1fe3b901e1dea704d741078cc13fc99565cfb16523\" returns successfully" Feb 13 20:19:49.990214 kubelet[2579]: E0213 20:19:49.990173 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:50.012118 kubelet[2579]: I0213 20:19:50.012038 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l48p4" podStartSLOduration=3.012012145 podStartE2EDuration="3.012012145s" podCreationTimestamp="2025-02-13 20:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:19:50.011760813 +0000 UTC m=+16.288353697" watchObservedRunningTime="2025-02-13 20:19:50.012012145 +0000 UTC m=+16.288605029" Feb 13 20:19:51.424755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737583943.mount: Deactivated successfully. 
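[Note] The containerd lines above trace the CRI call sequence kubelet drives for every pod: `RunPodSandbox` (which starts the `b207d3d5…` sandbox and its runc v2 shim, hence the plugin-loading lines), then `CreateContainer` inside that sandbox, then `StartContainer`. A toy model of that sequence; the interface and signatures below are invented to mirror the CRI verbs in the log and are not the real `k8s.io/cri-api` gRPC definitions:

```go
package main

import "fmt"

// runtimeService is a deliberately simplified stand-in for the CRI
// RuntimeService: the method names match the verbs in the log, but the
// signatures are illustrative only.
type runtimeService interface {
	RunPodSandbox(name, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(name, ns string) (string, error) {
	fmt.Printf("RunPodSandbox for %s/%s returns sandbox id ...\n", ns, name)
	return "b207d3d5", nil
}

func (fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	fmt.Printf("CreateContainer within sandbox %q for %s\n", sandboxID, name)
	return "ee1f077a", nil
}

func (fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func main() {
	var rt runtimeService = fakeRuntime{}
	// The same three-step flow produced the kube-proxy lines above.
	sandbox, _ := rt.RunPodSandbox("kube-proxy-l48p4", "kube-system")
	ctr, _ := rt.CreateContainer(sandbox, "kube-proxy")
	_ = rt.StartContainer(ctr)
}
```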
Feb 13 20:19:52.035257 containerd[1469]: time="2025-02-13T20:19:52.035198088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:52.036752 containerd[1469]: time="2025-02-13T20:19:52.036696975Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:19:52.037003 containerd[1469]: time="2025-02-13T20:19:52.036878092Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:52.039543 containerd[1469]: time="2025-02-13T20:19:52.039490714Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:52.040327 containerd[1469]: time="2025-02-13T20:19:52.040255388Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.124127798s" Feb 13 20:19:52.040327 containerd[1469]: time="2025-02-13T20:19:52.040293416Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:19:52.044919 containerd[1469]: time="2025-02-13T20:19:52.044640184Z" level=info msg="CreateContainer within sandbox \"964e28abbd9ff5100f00ed29e8af7224b5e13b261ba9acee2c215b2814bd140d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:19:52.064345 containerd[1469]: time="2025-02-13T20:19:52.062933600Z" level=info msg="CreateContainer within sandbox \"964e28abbd9ff5100f00ed29e8af7224b5e13b261ba9acee2c215b2814bd140d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6f637bccea660f4611a50742350ed3380108a9348e1a8d681403f4cb3114778f\"" Feb 13 20:19:52.063731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618065566.mount: Deactivated successfully. Feb 13 20:19:52.065474 containerd[1469]: time="2025-02-13T20:19:52.065407629Z" level=info msg="StartContainer for \"6f637bccea660f4611a50742350ed3380108a9348e1a8d681403f4cb3114778f\"" Feb 13 20:19:52.105669 systemd[1]: Started cri-containerd-6f637bccea660f4611a50742350ed3380108a9348e1a8d681403f4cb3114778f.scope - libcontainer container 6f637bccea660f4611a50742350ed3380108a9348e1a8d681403f4cb3114778f. 
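[Note] The `pod_startup_latency_tracker` entries bracket this pull: `podStartE2EDuration` measures creation to observed-running, while `podStartSLOduration` subtracts the image-pull window. That is why the static pods earlier report identical values (their pull timestamps are the zero time `0001-01-01`) and the tigera-operator entry just below reports 1.896s SLO against 5.023s E2E. A quick check of the arithmetic using the monotonic `m=+…` offsets from that entry:

```go
package main

import "fmt"

func main() {
	// Monotonic clock offsets copied from the tigera-operator entry's
	// m=+ suffixes (seconds since kubelet start).
	firstStartedPulling := 15.190567140
	lastFinishedPulling := 18.318206546
	e2e := 5.023705917 // podStartE2EDuration

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("podStartSLOduration=%.9f\n", e2e-pull) // 1.896066511, as logged
}
```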
Feb 13 20:19:52.142384 containerd[1469]: time="2025-02-13T20:19:52.142337891Z" level=info msg="StartContainer for \"6f637bccea660f4611a50742350ed3380108a9348e1a8d681403f4cb3114778f\" returns successfully" Feb 13 20:19:53.023807 kubelet[2579]: I0213 20:19:53.023729 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8svjz" podStartSLOduration=1.896066511 podStartE2EDuration="5.023705917s" podCreationTimestamp="2025-02-13 20:19:48 +0000 UTC" firstStartedPulling="2025-02-13 20:19:48.913974277 +0000 UTC m=+15.190567140" lastFinishedPulling="2025-02-13 20:19:52.041613681 +0000 UTC m=+18.318206546" observedRunningTime="2025-02-13 20:19:53.022212102 +0000 UTC m=+19.298804985" watchObservedRunningTime="2025-02-13 20:19:53.023705917 +0000 UTC m=+19.300298799" Feb 13 20:19:55.588047 kubelet[2579]: I0213 20:19:55.587914 2579 topology_manager.go:215] "Topology Admit Handler" podUID="658449cc-7959-4012-af75-2a4bcfb174e4" podNamespace="calico-system" podName="calico-typha-5b66d7dbc6-chq7s" Feb 13 20:19:55.598543 systemd[1]: Created slice kubepods-besteffort-pod658449cc_7959_4012_af75_2a4bcfb174e4.slice - libcontainer container kubepods-besteffort-pod658449cc_7959_4012_af75_2a4bcfb174e4.slice. Feb 13 20:19:55.737221 kubelet[2579]: I0213 20:19:55.735990 2579 topology_manager.go:215] "Topology Admit Handler" podUID="39184bb0-cb2d-427c-beb7-c5eeacb43ad1" podNamespace="calico-system" podName="calico-node-rj7zk" Feb 13 20:19:55.748570 systemd[1]: Created slice kubepods-besteffort-pod39184bb0_cb2d_427c_beb7_c5eeacb43ad1.slice - libcontainer container kubepods-besteffort-pod39184bb0_cb2d_427c_beb7_c5eeacb43ad1.slice. Feb 13 20:19:55.755978 kubelet[2579]: I0213 20:19:55.755940 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/658449cc-7959-4012-af75-2a4bcfb174e4-tigera-ca-bundle\") pod \"calico-typha-5b66d7dbc6-chq7s\" (UID: \"658449cc-7959-4012-af75-2a4bcfb174e4\") " pod="calico-system/calico-typha-5b66d7dbc6-chq7s" Feb 13 20:19:55.756586 kubelet[2579]: I0213 20:19:55.756472 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/658449cc-7959-4012-af75-2a4bcfb174e4-typha-certs\") pod \"calico-typha-5b66d7dbc6-chq7s\" (UID: \"658449cc-7959-4012-af75-2a4bcfb174e4\") " pod="calico-system/calico-typha-5b66d7dbc6-chq7s" Feb 13 20:19:55.756879 kubelet[2579]: I0213 20:19:55.756751 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72db\" (UniqueName: \"kubernetes.io/projected/658449cc-7959-4012-af75-2a4bcfb174e4-kube-api-access-q72db\") pod \"calico-typha-5b66d7dbc6-chq7s\" (UID: \"658449cc-7959-4012-af75-2a4bcfb174e4\") " pod="calico-system/calico-typha-5b66d7dbc6-chq7s" Feb 13 20:19:55.857564 kubelet[2579]: I0213 20:19:55.857209 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klpvj\" (UniqueName: \"kubernetes.io/projected/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-kube-api-access-klpvj\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857564 kubelet[2579]: I0213 20:19:55.857277 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-node-certs\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857564 kubelet[2579]: I0213 20:19:55.857360 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-policysync\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857564 kubelet[2579]: I0213 20:19:55.857387 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-run-calico\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857564 kubelet[2579]: I0213 20:19:55.857416 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-bin-dir\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857880 kubelet[2579]: I0213 20:19:55.857442 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-xtables-lock\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857880 kubelet[2579]: I0213 20:19:55.857462 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-net-dir\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857880 kubelet[2579]: I0213 20:19:55.857483 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-log-dir\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857880 kubelet[2579]: I0213 20:19:55.857513 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-lib-modules\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.857880 kubelet[2579]: I0213 20:19:55.857563 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-lib-calico\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.860413 kubelet[2579]: I0213 20:19:55.859299 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-tigera-ca-bundle\") pod \"calico-node-rj7zk\" 
(UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.860413 kubelet[2579]: I0213 20:19:55.859498 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-flexvol-driver-host\") pod \"calico-node-rj7zk\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " pod="calico-system/calico-node-rj7zk" Feb 13 20:19:55.888463 kubelet[2579]: I0213 20:19:55.887689 2579 topology_manager.go:215] "Topology Admit Handler" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" podNamespace="calico-system" podName="csi-node-driver-ws7h9" Feb 13 20:19:55.888463 kubelet[2579]: E0213 20:19:55.888025 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:19:55.904693 kubelet[2579]: E0213 20:19:55.904658 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:55.906837 containerd[1469]: time="2025-02-13T20:19:55.906034210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b66d7dbc6-chq7s,Uid:658449cc-7959-4012-af75-2a4bcfb174e4,Namespace:calico-system,Attempt:0,}" Feb 13 20:19:55.960279 kubelet[2579]: I0213 20:19:55.960192 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-972bx\" (UniqueName: \"kubernetes.io/projected/22ae9f28-0bd2-4232-81cb-1eee6e72e721-kube-api-access-972bx\") pod \"csi-node-driver-ws7h9\" (UID: \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\") " pod="calico-system/csi-node-driver-ws7h9" Feb 13 20:19:55.960279 kubelet[2579]: I0213 20:19:55.960251 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22ae9f28-0bd2-4232-81cb-1eee6e72e721-varrun\") pod \"csi-node-driver-ws7h9\" (UID: \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\") " pod="calico-system/csi-node-driver-ws7h9" Feb 13 20:19:55.962644 kubelet[2579]: I0213 20:19:55.962492 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22ae9f28-0bd2-4232-81cb-1eee6e72e721-kubelet-dir\") pod \"csi-node-driver-ws7h9\" (UID: \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\") " pod="calico-system/csi-node-driver-ws7h9" Feb 13 20:19:55.962914 kubelet[2579]: I0213 20:19:55.962755 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22ae9f28-0bd2-4232-81cb-1eee6e72e721-socket-dir\") pod \"csi-node-driver-ws7h9\" (UID: \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\") " pod="calico-system/csi-node-driver-ws7h9" Feb 13 20:19:55.962914 kubelet[2579]: I0213 20:19:55.962798 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22ae9f28-0bd2-4232-81cb-1eee6e72e721-registration-dir\") pod \"csi-node-driver-ws7h9\" (UID: \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\") " pod="calico-system/csi-node-driver-ws7h9" Feb 13 
20:19:55.977933 containerd[1469]: time="2025-02-13T20:19:55.975464984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:55.977933 containerd[1469]: time="2025-02-13T20:19:55.975560954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:55.977933 containerd[1469]: time="2025-02-13T20:19:55.975594964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:55.977933 containerd[1469]: time="2025-02-13T20:19:55.975772373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:55.978783 kubelet[2579]: E0213 20:19:55.978714 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:55.979238 kubelet[2579]: W0213 20:19:55.978931 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:55.979238 kubelet[2579]: E0213 20:19:55.978981 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.000628 kubelet[2579]: E0213 20:19:56.000571 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.000628 kubelet[2579]: W0213 20:19:56.000614 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.000826 kubelet[2579]: E0213 20:19:56.000651 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.013125 kubelet[2579]: E0213 20:19:56.013072 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.013125 kubelet[2579]: W0213 20:19:56.013118 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.013435 kubelet[2579]: E0213 20:19:56.013222 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.045653 systemd[1]: Started cri-containerd-8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf.scope - libcontainer container 8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf. 
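[Note] The wall of `driver-call.go` errors starting here is one failure repeated for every FlexVolume probe: the Calico `nodeagent~uds` plugin directory exists under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/`, but its `uds` executable cannot be found, so each `init` call returns empty output and kubelet's attempt to decode that output as JSON fails. Both error strings are reproducible in a few lines (the driver name below is a stand-in; any missing executable behaves the same):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Invoking a driver binary that does not exist yields the W-level error...
	out, err := exec.Command("uds-driver-not-installed", "init").CombinedOutput()
	fmt.Println(err) // exec: "uds-driver-not-installed": executable file not found in $PATH

	// ...and unmarshalling its (empty) output yields the E-level error.
	var status map[string]any
	fmt.Println(json.Unmarshal(out, &status)) // unexpected end of JSON input
}
```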
Feb 13 20:19:56.054595 kubelet[2579]: E0213 20:19:56.054520 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:56.057061 containerd[1469]: time="2025-02-13T20:19:56.055450775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rj7zk,Uid:39184bb0-cb2d-427c-beb7-c5eeacb43ad1,Namespace:calico-system,Attempt:0,}" Feb 13 20:19:56.064744 kubelet[2579]: E0213 20:19:56.064499 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.064744 kubelet[2579]: W0213 20:19:56.064527 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.064744 kubelet[2579]: E0213 20:19:56.064563 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.065352 kubelet[2579]: E0213 20:19:56.065065 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.065352 kubelet[2579]: W0213 20:19:56.065091 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.065352 kubelet[2579]: E0213 20:19:56.065114 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.065750 kubelet[2579]: E0213 20:19:56.065710 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.065750 kubelet[2579]: W0213 20:19:56.065736 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.065919 kubelet[2579]: E0213 20:19:56.065769 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.066073 kubelet[2579]: E0213 20:19:56.066054 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.066118 kubelet[2579]: W0213 20:19:56.066102 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.066276 kubelet[2579]: E0213 20:19:56.066126 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:56.067509 kubelet[2579]: E0213 20:19:56.066558 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.067509 kubelet[2579]: W0213 20:19:56.066574 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.067509 kubelet[2579]: E0213 20:19:56.066590 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.068662 kubelet[2579]: E0213 20:19:56.068632 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.068662 kubelet[2579]: W0213 20:19:56.068657 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.069065 kubelet[2579]: E0213 20:19:56.068851 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.069103 kubelet[2579]: E0213 20:19:56.069083 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.069103 kubelet[2579]: W0213 20:19:56.069094 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.069919 kubelet[2579]: E0213 20:19:56.069282 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.070004 kubelet[2579]: E0213 20:19:56.069918 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.070004 kubelet[2579]: W0213 20:19:56.069935 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.070066 kubelet[2579]: E0213 20:19:56.070014 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.070461 kubelet[2579]: E0213 20:19:56.070414 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.070461 kubelet[2579]: W0213 20:19:56.070451 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.070966 kubelet[2579]: E0213 20:19:56.070479 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:56.070966 kubelet[2579]: E0213 20:19:56.070882 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.070966 kubelet[2579]: W0213 20:19:56.070896 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.072402 kubelet[2579]: E0213 20:19:56.072189 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.072571 kubelet[2579]: E0213 20:19:56.072467 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.072571 kubelet[2579]: W0213 20:19:56.072482 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.072666 kubelet[2579]: E0213 20:19:56.072628 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.073008 kubelet[2579]: E0213 20:19:56.072987 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.073008 kubelet[2579]: W0213 20:19:56.073006 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.073341 kubelet[2579]: E0213 20:19:56.073217 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.076511 kubelet[2579]: E0213 20:19:56.076468 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.076511 kubelet[2579]: W0213 20:19:56.076498 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.076859 kubelet[2579]: E0213 20:19:56.076666 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.077151 kubelet[2579]: E0213 20:19:56.077129 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.077272 kubelet[2579]: W0213 20:19:56.077147 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.077422 kubelet[2579]: E0213 20:19:56.077296 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:56.077678 kubelet[2579]: E0213 20:19:56.077654 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.077678 kubelet[2579]: W0213 20:19:56.077674 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.077861 kubelet[2579]: E0213 20:19:56.077806 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.078145 kubelet[2579]: E0213 20:19:56.078124 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.078145 kubelet[2579]: W0213 20:19:56.078143 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.078411 kubelet[2579]: E0213 20:19:56.078272 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.080739 kubelet[2579]: E0213 20:19:56.080703 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.080739 kubelet[2579]: W0213 20:19:56.080740 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.081016 kubelet[2579]: E0213 20:19:56.080937 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.082957 kubelet[2579]: E0213 20:19:56.082544 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.082957 kubelet[2579]: W0213 20:19:56.082570 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.083658 kubelet[2579]: E0213 20:19:56.083159 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.084769 kubelet[2579]: E0213 20:19:56.084741 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.084769 kubelet[2579]: W0213 20:19:56.084764 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.085182 kubelet[2579]: E0213 20:19:56.085150 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:56.086651 kubelet[2579]: E0213 20:19:56.086361 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.086651 kubelet[2579]: W0213 20:19:56.086388 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.086651 kubelet[2579]: E0213 20:19:56.086497 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.087833 kubelet[2579]: E0213 20:19:56.087373 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.087833 kubelet[2579]: W0213 20:19:56.087398 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.087833 kubelet[2579]: E0213 20:19:56.087688 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.087833 kubelet[2579]: W0213 20:19:56.087702 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.088724 kubelet[2579]: E0213 20:19:56.088344 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.088724 kubelet[2579]: E0213 20:19:56.088426 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.088724 kubelet[2579]: E0213 20:19:56.088712 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.093363 kubelet[2579]: W0213 20:19:56.088736 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.093363 kubelet[2579]: E0213 20:19:56.088763 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.093363 kubelet[2579]: E0213 20:19:56.089338 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.093363 kubelet[2579]: W0213 20:19:56.089351 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.093363 kubelet[2579]: E0213 20:19:56.089381 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:56.093363 kubelet[2579]: E0213 20:19:56.090684 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.093363 kubelet[2579]: W0213 20:19:56.090704 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.093363 kubelet[2579]: E0213 20:19:56.090726 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.204657 kubelet[2579]: E0213 20:19:56.204432 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.204657 kubelet[2579]: W0213 20:19:56.204466 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.204657 kubelet[2579]: E0213 20:19:56.204498 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.220413 kubelet[2579]: E0213 20:19:56.217774 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:56.220413 kubelet[2579]: W0213 20:19:56.217802 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:56.220413 kubelet[2579]: E0213 20:19:56.217832 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:56.242777 containerd[1469]: time="2025-02-13T20:19:56.241822511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:19:56.245129 containerd[1469]: time="2025-02-13T20:19:56.242725545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:19:56.245381 containerd[1469]: time="2025-02-13T20:19:56.244540265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:56.245381 containerd[1469]: time="2025-02-13T20:19:56.244685787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:19:56.297655 systemd[1]: Started cri-containerd-30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a.scope - libcontainer container 30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a. 
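[Note] The `csi-node-driver-ws7h9` pod keeps failing to sync ("cni plugin not initialized") because it needs pod networking and nothing has written a CNI configuration yet: containerd said as much earlier ("No cni config template is specified, wait for other system components to drop the config"), and the calico-node container that will eventually drop a config into `/etc/cni/net.d` has only just had its sandbox created. A sketch of the readiness check, under the assumption that it amounts to finding at least one network config file in the standard directory (a simplification of what containerd's CNI plugin actually does):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured reports whether any CNI network config has been dropped
// into dir; libcni recognizes .conf, .conflist, and .json files.
func cniConfigured(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	// Until calico-node writes its config, pods needing pod networking stay pending.
	fmt.Println(cniConfigured("/etc/cni/net.d"))
}
```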
Feb 13 20:19:56.327606 containerd[1469]: time="2025-02-13T20:19:56.327515076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b66d7dbc6-chq7s,Uid:658449cc-7959-4012-af75-2a4bcfb174e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\"" Feb 13 20:19:56.331399 kubelet[2579]: E0213 20:19:56.330014 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:56.333942 containerd[1469]: time="2025-02-13T20:19:56.333893260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:19:56.375415 containerd[1469]: time="2025-02-13T20:19:56.374912269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rj7zk,Uid:39184bb0-cb2d-427c-beb7-c5eeacb43ad1,Namespace:calico-system,Attempt:0,} returns sandbox id \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\"" Feb 13 20:19:56.377670 kubelet[2579]: E0213 20:19:56.377546 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:57.854768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646039466.mount: Deactivated successfully. Feb 13 20:19:57.921654 kubelet[2579]: E0213 20:19:57.921597 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:19:58.717398 containerd[1469]: time="2025-02-13T20:19:58.716063358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:19:58.717398 containerd[1469]: time="2025-02-13T20:19:58.716537027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:58.721420 containerd[1469]: time="2025-02-13T20:19:58.721373378Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:58.725047 containerd[1469]: time="2025-02-13T20:19:58.724996241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:19:58.726250 containerd[1469]: time="2025-02-13T20:19:58.726161517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.392213874s" Feb 13 20:19:58.726577 containerd[1469]: time="2025-02-13T20:19:58.726449909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:19:58.729596 containerd[1469]: time="2025-02-13T20:19:58.729563689Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:19:58.751365 containerd[1469]: time="2025-02-13T20:19:58.751157366Z" level=info msg="CreateContainer within sandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:19:58.777737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907626023.mount: Deactivated successfully. Feb 13 20:19:58.785070 containerd[1469]: time="2025-02-13T20:19:58.785001219Z" level=info msg="CreateContainer within sandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\"" Feb 13 20:19:58.786836 containerd[1469]: time="2025-02-13T20:19:58.786790582Z" level=info msg="StartContainer for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\"" Feb 13 20:19:58.835639 systemd[1]: Started cri-containerd-d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558.scope - libcontainer container d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558. Feb 13 20:19:58.920130 containerd[1469]: time="2025-02-13T20:19:58.919851729Z" level=info msg="StartContainer for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" returns successfully" Feb 13 20:19:59.038549 kubelet[2579]: E0213 20:19:59.038477 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:19:59.086246 kubelet[2579]: E0213 20:19:59.086190 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.086246 kubelet[2579]: W0213 20:19:59.086230 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.086246 kubelet[2579]: E0213 20:19:59.086263 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.087625 kubelet[2579]: E0213 20:19:59.087386 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.087625 kubelet[2579]: W0213 20:19:59.087414 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.087625 kubelet[2579]: E0213 20:19:59.087443 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.088285 kubelet[2579]: E0213 20:19:59.087749 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.088285 kubelet[2579]: W0213 20:19:59.087779 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.088285 kubelet[2579]: E0213 20:19:59.087799 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.088285 kubelet[2579]: E0213 20:19:59.088190 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.088285 kubelet[2579]: W0213 20:19:59.088206 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.088285 kubelet[2579]: E0213 20:19:59.088221 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.089612 kubelet[2579]: E0213 20:19:59.089572 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.089612 kubelet[2579]: W0213 20:19:59.089596 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.089619 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.089938 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.090046 kubelet[2579]: W0213 20:19:59.089952 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.089968 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.090556 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.090046 kubelet[2579]: W0213 20:19:59.090573 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.090601 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.090900 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.090046 kubelet[2579]: W0213 20:19:59.090915 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.090046 kubelet[2579]: E0213 20:19:59.090932 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.091378 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.093157 kubelet[2579]: W0213 20:19:59.091391 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.091427 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.091634 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.093157 kubelet[2579]: W0213 20:19:59.091648 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.091664 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.092487 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.093157 kubelet[2579]: W0213 20:19:59.092502 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.092515 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.093157 kubelet[2579]: E0213 20:19:59.092892 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.093837 kubelet[2579]: W0213 20:19:59.092905 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.093837 kubelet[2579]: E0213 20:19:59.092922 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.094410 kubelet[2579]: E0213 20:19:59.094383 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.094410 kubelet[2579]: W0213 20:19:59.094405 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.094746 kubelet[2579]: E0213 20:19:59.094421 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.094801 kubelet[2579]: E0213 20:19:59.094746 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.094801 kubelet[2579]: W0213 20:19:59.094760 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.094801 kubelet[2579]: E0213 20:19:59.094775 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.095122 kubelet[2579]: E0213 20:19:59.095081 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.095122 kubelet[2579]: W0213 20:19:59.095100 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.095122 kubelet[2579]: E0213 20:19:59.095116 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.128991 kubelet[2579]: E0213 20:19:59.128948 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.128991 kubelet[2579]: W0213 20:19:59.128976 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.129233 kubelet[2579]: E0213 20:19:59.129003 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.129506 kubelet[2579]: E0213 20:19:59.129486 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.129506 kubelet[2579]: W0213 20:19:59.129506 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.129648 kubelet[2579]: E0213 20:19:59.129556 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.129956 kubelet[2579]: E0213 20:19:59.129931 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.129956 kubelet[2579]: W0213 20:19:59.129946 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.130055 kubelet[2579]: E0213 20:19:59.129971 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.130279 kubelet[2579]: E0213 20:19:59.130256 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.130279 kubelet[2579]: W0213 20:19:59.130272 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.130480 kubelet[2579]: E0213 20:19:59.130454 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.130733 kubelet[2579]: E0213 20:19:59.130706 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.130733 kubelet[2579]: W0213 20:19:59.130722 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.130868 kubelet[2579]: E0213 20:19:59.130739 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.131279 kubelet[2579]: E0213 20:19:59.131252 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.131279 kubelet[2579]: W0213 20:19:59.131268 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.131508 kubelet[2579]: E0213 20:19:59.131337 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.131611 kubelet[2579]: E0213 20:19:59.131593 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.131611 kubelet[2579]: W0213 20:19:59.131606 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.131698 kubelet[2579]: E0213 20:19:59.131685 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.131911 kubelet[2579]: E0213 20:19:59.131892 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.131911 kubelet[2579]: W0213 20:19:59.131904 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.132010 kubelet[2579]: E0213 20:19:59.131996 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.132477 kubelet[2579]: E0213 20:19:59.132452 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.132477 kubelet[2579]: W0213 20:19:59.132467 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.132477 kubelet[2579]: E0213 20:19:59.132483 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.132791 kubelet[2579]: E0213 20:19:59.132764 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.132791 kubelet[2579]: W0213 20:19:59.132782 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.132910 kubelet[2579]: E0213 20:19:59.132809 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.133197 kubelet[2579]: E0213 20:19:59.133164 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.133266 kubelet[2579]: W0213 20:19:59.133206 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.133908 kubelet[2579]: E0213 20:19:59.133875 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.134900 kubelet[2579]: E0213 20:19:59.134876 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.134900 kubelet[2579]: W0213 20:19:59.134894 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.135680 kubelet[2579]: E0213 20:19:59.135646 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.135680 kubelet[2579]: W0213 20:19:59.135667 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.135680 kubelet[2579]: E0213 20:19:59.135684 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.138353 kubelet[2579]: E0213 20:19:59.137804 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.140277 kubelet[2579]: E0213 20:19:59.140189 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.140277 kubelet[2579]: W0213 20:19:59.140237 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.140277 kubelet[2579]: E0213 20:19:59.140277 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.140786 kubelet[2579]: E0213 20:19:59.140723 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.140786 kubelet[2579]: W0213 20:19:59.140749 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.140786 kubelet[2579]: E0213 20:19:59.140776 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.141518 kubelet[2579]: E0213 20:19:59.141463 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.141518 kubelet[2579]: W0213 20:19:59.141485 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.141672 kubelet[2579]: E0213 20:19:59.141593 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:19:59.142957 kubelet[2579]: E0213 20:19:59.142917 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.142957 kubelet[2579]: W0213 20:19:59.142939 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.142957 kubelet[2579]: E0213 20:19:59.142962 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.144284 kubelet[2579]: E0213 20:19:59.144136 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:19:59.144284 kubelet[2579]: W0213 20:19:59.144164 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:19:59.144284 kubelet[2579]: E0213 20:19:59.144187 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:19:59.921199 kubelet[2579]: E0213 20:19:59.920768 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:00.048265 kubelet[2579]: I0213 20:20:00.048175 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:20:00.049762 kubelet[2579]: E0213 20:20:00.049716 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:00.111741 kubelet[2579]: E0213 20:20:00.109906 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.111741 kubelet[2579]: W0213 20:20:00.110374 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.111741 kubelet[2579]: E0213 20:20:00.110440 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.116377 kubelet[2579]: E0213 20:20:00.116262 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.116866 kubelet[2579]: W0213 20:20:00.116517 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.116866 kubelet[2579]: E0213 20:20:00.116671 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.119697 kubelet[2579]: E0213 20:20:00.119552 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.119697 kubelet[2579]: W0213 20:20:00.119627 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.119954 kubelet[2579]: E0213 20:20:00.119759 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.121692 kubelet[2579]: E0213 20:20:00.121416 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.122292 kubelet[2579]: W0213 20:20:00.121828 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.122292 kubelet[2579]: E0213 20:20:00.121879 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.124739 kubelet[2579]: E0213 20:20:00.124380 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.124739 kubelet[2579]: W0213 20:20:00.124568 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.124739 kubelet[2579]: E0213 20:20:00.124624 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.126283 kubelet[2579]: E0213 20:20:00.125717 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.126283 kubelet[2579]: W0213 20:20:00.125748 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.126283 kubelet[2579]: E0213 20:20:00.125784 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.128139 kubelet[2579]: E0213 20:20:00.127636 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.128139 kubelet[2579]: W0213 20:20:00.127666 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.128139 kubelet[2579]: E0213 20:20:00.127699 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.129567 kubelet[2579]: E0213 20:20:00.128798 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.129567 kubelet[2579]: W0213 20:20:00.128824 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.129567 kubelet[2579]: E0213 20:20:00.128896 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.129752 kubelet[2579]: E0213 20:20:00.129642 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.129752 kubelet[2579]: W0213 20:20:00.129670 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.129752 kubelet[2579]: E0213 20:20:00.129724 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.131216 kubelet[2579]: E0213 20:20:00.130811 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.131216 kubelet[2579]: W0213 20:20:00.130839 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.131216 kubelet[2579]: E0213 20:20:00.130870 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.132152 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.134015 kubelet[2579]: W0213 20:20:00.132179 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.132282 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.132908 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.134015 kubelet[2579]: W0213 20:20:00.132932 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.132984 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.133435 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.134015 kubelet[2579]: W0213 20:20:00.133476 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.133498 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.134015 kubelet[2579]: E0213 20:20:00.133906 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.134465 kubelet[2579]: W0213 20:20:00.133922 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.134465 kubelet[2579]: E0213 20:20:00.133940 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.134465 kubelet[2579]: E0213 20:20:00.134227 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.134465 kubelet[2579]: W0213 20:20:00.134242 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.134465 kubelet[2579]: E0213 20:20:00.134258 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.148780 kubelet[2579]: E0213 20:20:00.147593 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.148780 kubelet[2579]: W0213 20:20:00.147642 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.148780 kubelet[2579]: E0213 20:20:00.147677 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.150025 kubelet[2579]: E0213 20:20:00.149762 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.150025 kubelet[2579]: W0213 20:20:00.149799 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.150025 kubelet[2579]: E0213 20:20:00.149844 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.151810 kubelet[2579]: E0213 20:20:00.151287 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.151810 kubelet[2579]: W0213 20:20:00.151339 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.152884 kubelet[2579]: E0213 20:20:00.152066 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.153997 kubelet[2579]: E0213 20:20:00.153602 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.153997 kubelet[2579]: W0213 20:20:00.153634 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.154727 kubelet[2579]: E0213 20:20:00.154646 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.156811 kubelet[2579]: E0213 20:20:00.156590 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.156811 kubelet[2579]: W0213 20:20:00.156620 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.156811 kubelet[2579]: E0213 20:20:00.156739 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.157672 kubelet[2579]: E0213 20:20:00.157419 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.157672 kubelet[2579]: W0213 20:20:00.157444 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.157672 kubelet[2579]: E0213 20:20:00.157502 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.158226 kubelet[2579]: E0213 20:20:00.157950 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.158226 kubelet[2579]: W0213 20:20:00.158002 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.158226 kubelet[2579]: E0213 20:20:00.158099 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.160016 kubelet[2579]: E0213 20:20:00.159678 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.160016 kubelet[2579]: W0213 20:20:00.159725 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.160016 kubelet[2579]: E0213 20:20:00.159863 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.162961 kubelet[2579]: E0213 20:20:00.161835 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.162961 kubelet[2579]: W0213 20:20:00.161869 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.162961 kubelet[2579]: E0213 20:20:00.162848 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.165847 kubelet[2579]: E0213 20:20:00.164612 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.165847 kubelet[2579]: W0213 20:20:00.164652 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.165847 kubelet[2579]: E0213 20:20:00.164908 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.166839 kubelet[2579]: E0213 20:20:00.166694 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.166839 kubelet[2579]: W0213 20:20:00.166726 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.167021 kubelet[2579]: E0213 20:20:00.166923 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.168249 kubelet[2579]: E0213 20:20:00.167701 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.168249 kubelet[2579]: W0213 20:20:00.167731 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.168249 kubelet[2579]: E0213 20:20:00.168045 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.168249 kubelet[2579]: E0213 20:20:00.168129 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.168249 kubelet[2579]: W0213 20:20:00.168147 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.168249 kubelet[2579]: E0213 20:20:00.168244 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.172040 kubelet[2579]: E0213 20:20:00.171558 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.172040 kubelet[2579]: W0213 20:20:00.171609 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.174971 kubelet[2579]: E0213 20:20:00.172291 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.174971 kubelet[2579]: E0213 20:20:00.173866 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.174971 kubelet[2579]: W0213 20:20:00.173895 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.174971 kubelet[2579]: E0213 20:20:00.173948 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.178264 kubelet[2579]: E0213 20:20:00.178174 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.178264 kubelet[2579]: W0213 20:20:00.178216 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.178794 kubelet[2579]: E0213 20:20:00.178673 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.178794 kubelet[2579]: W0213 20:20:00.178690 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.180059 kubelet[2579]: E0213 20:20:00.178953 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:20:00.180059 kubelet[2579]: W0213 20:20:00.178968 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:20:00.180059 kubelet[2579]: E0213 20:20:00.179007 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.180059 kubelet[2579]: E0213 20:20:00.179062 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:20:00.180059 kubelet[2579]: E0213 20:20:00.179089 2579 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:20:00.429411 containerd[1469]: time="2025-02-13T20:20:00.429107405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:00.432381 containerd[1469]: time="2025-02-13T20:20:00.432256225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:20:00.444470 containerd[1469]: time="2025-02-13T20:20:00.444390828Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:00.471038 containerd[1469]: time="2025-02-13T20:20:00.468600101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:00.471038 containerd[1469]: time="2025-02-13T20:20:00.470052697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.740183174s" Feb 13 20:20:00.471038 containerd[1469]: time="2025-02-13T20:20:00.470101746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:20:00.478187 containerd[1469]: time="2025-02-13T20:20:00.478094025Z" level=info msg="CreateContainer within sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:20:00.570427 containerd[1469]: time="2025-02-13T20:20:00.570275475Z" level=info msg="CreateContainer within sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\"" Feb 13 20:20:00.572538 containerd[1469]: time="2025-02-13T20:20:00.571260191Z" level=info msg="StartContainer for \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\"" Feb 13 20:20:00.760157 systemd[1]: Started cri-containerd-257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e.scope - libcontainer container 257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e. Feb 13 20:20:00.852892 containerd[1469]: time="2025-02-13T20:20:00.852822888Z" level=info msg="StartContainer for \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\" returns successfully" Feb 13 20:20:00.911843 systemd[1]: cri-containerd-257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e.scope: Deactivated successfully. Feb 13 20:20:00.994190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e-rootfs.mount: Deactivated successfully. 
Feb 13 20:20:01.071177 kubelet[2579]: E0213 20:20:01.069670 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:01.172476 kubelet[2579]: I0213 20:20:01.171905 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b66d7dbc6-chq7s" podStartSLOduration=3.776964813 podStartE2EDuration="6.171877149s" podCreationTimestamp="2025-02-13 20:19:55 +0000 UTC" firstStartedPulling="2025-02-13 20:19:56.333073465 +0000 UTC m=+22.609666327" lastFinishedPulling="2025-02-13 20:19:58.727985801 +0000 UTC m=+25.004578663" observedRunningTime="2025-02-13 20:19:59.081470736 +0000 UTC m=+25.358063622" watchObservedRunningTime="2025-02-13 20:20:01.171877149 +0000 UTC m=+27.448470037" Feb 13 20:20:01.235250 containerd[1469]: time="2025-02-13T20:20:01.229884391Z" level=info msg="shim disconnected" id=257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e namespace=k8s.io Feb 13 20:20:01.235250 containerd[1469]: time="2025-02-13T20:20:01.235084343Z" level=warning msg="cleaning up after shim disconnected" id=257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e namespace=k8s.io Feb 13 20:20:01.235250 containerd[1469]: time="2025-02-13T20:20:01.235112938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:20:01.926844 kubelet[2579]: E0213 20:20:01.925055 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:02.089111 kubelet[2579]: E0213 20:20:02.088501 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:02.096640 containerd[1469]: time="2025-02-13T20:20:02.094491217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:20:03.926905 kubelet[2579]: E0213 20:20:03.924159 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:05.922112 kubelet[2579]: E0213 20:20:05.922054 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:07.920971 kubelet[2579]: E0213 20:20:07.920723 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:08.252446 containerd[1469]: time="2025-02-13T20:20:08.250565936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 20:20:08.253417 containerd[1469]: time="2025-02-13T20:20:08.253346376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:20:08.254778 containerd[1469]: time="2025-02-13T20:20:08.254715361Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:08.259376 containerd[1469]: time="2025-02-13T20:20:08.258983759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:08.260571 containerd[1469]: time="2025-02-13T20:20:08.259851162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.165291237s" Feb 13 20:20:08.260571 containerd[1469]: time="2025-02-13T20:20:08.259888424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:20:08.265435 containerd[1469]: time="2025-02-13T20:20:08.265256720Z" level=info msg="CreateContainer within sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:20:08.298843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055711488.mount: Deactivated successfully. Feb 13 20:20:08.310782 containerd[1469]: time="2025-02-13T20:20:08.310700343Z" level=info msg="CreateContainer within sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\"" Feb 13 20:20:08.312448 containerd[1469]: time="2025-02-13T20:20:08.312397385Z" level=info msg="StartContainer for \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\"" Feb 13 20:20:08.431695 systemd[1]: Started cri-containerd-f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e.scope - libcontainer container f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e. Feb 13 20:20:08.491193 containerd[1469]: time="2025-02-13T20:20:08.490835289Z" level=info msg="StartContainer for \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\" returns successfully" Feb 13 20:20:09.173352 kubelet[2579]: E0213 20:20:09.172285 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:09.429520 systemd[1]: cri-containerd-f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e.scope: Deactivated successfully. Feb 13 20:20:09.476498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e-rootfs.mount: Deactivated successfully. 
Feb 13 20:20:09.483966 kubelet[2579]: I0213 20:20:09.483035 2579 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:20:09.486059 containerd[1469]: time="2025-02-13T20:20:09.485974313Z" level=info msg="shim disconnected" id=f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e namespace=k8s.io Feb 13 20:20:09.487181 containerd[1469]: time="2025-02-13T20:20:09.486636249Z" level=warning msg="cleaning up after shim disconnected" id=f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e namespace=k8s.io Feb 13 20:20:09.487181 containerd[1469]: time="2025-02-13T20:20:09.486664931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:20:09.564590 kubelet[2579]: I0213 20:20:09.564029 2579 topology_manager.go:215] "Topology Admit Handler" podUID="2e35ffa3-87a6-4203-a3ca-0abebfa17931" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c6j9m" Feb 13 20:20:09.569695 kubelet[2579]: I0213 20:20:09.569088 2579 topology_manager.go:215] "Topology Admit Handler" podUID="fae6f7a6-86bf-475d-928d-3783899e47e1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6hp4f" Feb 13 20:20:09.585249 kubelet[2579]: I0213 20:20:09.584616 2579 topology_manager.go:215] "Topology Admit Handler" podUID="9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6" podNamespace="calico-apiserver" podName="calico-apiserver-59955948f8-pf587" Feb 13 20:20:09.589376 kubelet[2579]: I0213 20:20:09.586247 2579 topology_manager.go:215] "Topology Admit Handler" podUID="7c64b4ea-ed0c-443b-a61d-284300b0cf5b" podNamespace="calico-system" podName="calico-kube-controllers-56dc9cc855-j5dzk" Feb 13 20:20:09.589376 kubelet[2579]: I0213 20:20:09.586488 2579 topology_manager.go:215] "Topology Admit Handler" podUID="960b37af-3575-4783-9059-3054ce49019f" podNamespace="calico-apiserver" podName="calico-apiserver-59955948f8-xl9kp" Feb 13 20:20:09.588866 systemd[1]: Created slice kubepods-burstable-pod2e35ffa3_87a6_4203_a3ca_0abebfa17931.slice - libcontainer container kubepods-burstable-pod2e35ffa3_87a6_4203_a3ca_0abebfa17931.slice. Feb 13 20:20:09.604233 systemd[1]: Created slice kubepods-burstable-podfae6f7a6_86bf_475d_928d_3783899e47e1.slice - libcontainer container kubepods-burstable-podfae6f7a6_86bf_475d_928d_3783899e47e1.slice. Feb 13 20:20:09.618892 systemd[1]: Created slice kubepods-besteffort-pod960b37af_3575_4783_9059_3054ce49019f.slice - libcontainer container kubepods-besteffort-pod960b37af_3575_4783_9059_3054ce49019f.slice. Feb 13 20:20:09.638280 systemd[1]: Created slice kubepods-besteffort-pod9e72eb3b_0738_4c0b_a16d_dc0fdc609fa6.slice - libcontainer container kubepods-besteffort-pod9e72eb3b_0738_4c0b_a16d_dc0fdc609fa6.slice. Feb 13 20:20:09.654277 systemd[1]: Created slice kubepods-besteffort-pod7c64b4ea_ed0c_443b_a61d_284300b0cf5b.slice - libcontainer container kubepods-besteffort-pod7c64b4ea_ed0c_443b_a61d_284300b0cf5b.slice. 
Feb 13 20:20:09.703458 kubelet[2579]: I0213 20:20:09.702522 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e35ffa3-87a6-4203-a3ca-0abebfa17931-config-volume\") pod \"coredns-7db6d8ff4d-c6j9m\" (UID: \"2e35ffa3-87a6-4203-a3ca-0abebfa17931\") " pod="kube-system/coredns-7db6d8ff4d-c6j9m" Feb 13 20:20:09.703458 kubelet[2579]: I0213 20:20:09.702685 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57zp\" (UniqueName: \"kubernetes.io/projected/2e35ffa3-87a6-4203-a3ca-0abebfa17931-kube-api-access-h57zp\") pod \"coredns-7db6d8ff4d-c6j9m\" (UID: \"2e35ffa3-87a6-4203-a3ca-0abebfa17931\") " pod="kube-system/coredns-7db6d8ff4d-c6j9m" Feb 13 20:20:09.703458 kubelet[2579]: I0213 20:20:09.702757 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpbq\" (UniqueName: \"kubernetes.io/projected/960b37af-3575-4783-9059-3054ce49019f-kube-api-access-xwpbq\") pod \"calico-apiserver-59955948f8-xl9kp\" (UID: \"960b37af-3575-4783-9059-3054ce49019f\") " pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" Feb 13 20:20:09.703458 kubelet[2579]: I0213 20:20:09.702834 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6-calico-apiserver-certs\") pod \"calico-apiserver-59955948f8-pf587\" (UID: \"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6\") " pod="calico-apiserver/calico-apiserver-59955948f8-pf587" Feb 13 20:20:09.703458 kubelet[2579]: I0213 20:20:09.702866 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/960b37af-3575-4783-9059-3054ce49019f-calico-apiserver-certs\") pod \"calico-apiserver-59955948f8-xl9kp\" (UID: \"960b37af-3575-4783-9059-3054ce49019f\") " pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" Feb 13 20:20:09.703867 kubelet[2579]: I0213 20:20:09.703235 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fae6f7a6-86bf-475d-928d-3783899e47e1-config-volume\") pod \"coredns-7db6d8ff4d-6hp4f\" (UID: \"fae6f7a6-86bf-475d-928d-3783899e47e1\") " pod="kube-system/coredns-7db6d8ff4d-6hp4f" Feb 13 20:20:09.703867 kubelet[2579]: I0213 20:20:09.703340 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddvr\" (UniqueName: \"kubernetes.io/projected/fae6f7a6-86bf-475d-928d-3783899e47e1-kube-api-access-dddvr\") pod \"coredns-7db6d8ff4d-6hp4f\" (UID: \"fae6f7a6-86bf-475d-928d-3783899e47e1\") " pod="kube-system/coredns-7db6d8ff4d-6hp4f" Feb 13 20:20:09.703867 kubelet[2579]: I0213 20:20:09.703384 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-tigera-ca-bundle\") pod \"calico-kube-controllers-56dc9cc855-j5dzk\" (UID: \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\") " pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" Feb 13 20:20:09.703867 kubelet[2579]: I0213 20:20:09.703420 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg627\" 
(UniqueName: \"kubernetes.io/projected/9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6-kube-api-access-mg627\") pod \"calico-apiserver-59955948f8-pf587\" (UID: \"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6\") " pod="calico-apiserver/calico-apiserver-59955948f8-pf587" Feb 13 20:20:09.703867 kubelet[2579]: I0213 20:20:09.703450 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvgv\" (UniqueName: \"kubernetes.io/projected/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-kube-api-access-kkvgv\") pod \"calico-kube-controllers-56dc9cc855-j5dzk\" (UID: \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\") " pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" Feb 13 20:20:09.908152 kubelet[2579]: E0213 20:20:09.906490 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:09.909599 containerd[1469]: time="2025-02-13T20:20:09.908469629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c6j9m,Uid:2e35ffa3-87a6-4203-a3ca-0abebfa17931,Namespace:kube-system,Attempt:0,}" Feb 13 20:20:09.911502 kubelet[2579]: E0213 20:20:09.911445 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:09.912273 containerd[1469]: time="2025-02-13T20:20:09.912220519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hp4f,Uid:fae6f7a6-86bf-475d-928d-3783899e47e1,Namespace:kube-system,Attempt:0,}" Feb 13 20:20:09.931768 containerd[1469]: time="2025-02-13T20:20:09.931709606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-xl9kp,Uid:960b37af-3575-4783-9059-3054ce49019f,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:20:09.937630 systemd[1]: Created slice kubepods-besteffort-pod22ae9f28_0bd2_4232_81cb_1eee6e72e721.slice - libcontainer container kubepods-besteffort-pod22ae9f28_0bd2_4232_81cb_1eee6e72e721.slice. 
Feb 13 20:20:09.942788 containerd[1469]: time="2025-02-13T20:20:09.942334779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ws7h9,Uid:22ae9f28-0bd2-4232-81cb-1eee6e72e721,Namespace:calico-system,Attempt:0,}" Feb 13 20:20:09.945251 containerd[1469]: time="2025-02-13T20:20:09.945197855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-pf587,Uid:9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:20:09.961463 containerd[1469]: time="2025-02-13T20:20:09.960808418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56dc9cc855-j5dzk,Uid:7c64b4ea-ed0c-443b-a61d-284300b0cf5b,Namespace:calico-system,Attempt:0,}" Feb 13 20:20:10.188967 kubelet[2579]: E0213 20:20:10.188798 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:10.232009 containerd[1469]: time="2025-02-13T20:20:10.230824106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:20:10.562382 containerd[1469]: time="2025-02-13T20:20:10.562243231Z" level=error msg="Failed to destroy network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.568519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027-shm.mount: Deactivated successfully. Feb 13 20:20:10.579365 containerd[1469]: time="2025-02-13T20:20:10.577564149Z" level=error msg="Failed to destroy network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.583547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68-shm.mount: Deactivated successfully. 
Feb 13 20:20:10.589767 containerd[1469]: time="2025-02-13T20:20:10.589415190Z" level=error msg="encountered an error cleaning up failed sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.589767 containerd[1469]: time="2025-02-13T20:20:10.589590272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ws7h9,Uid:22ae9f28-0bd2-4232-81cb-1eee6e72e721,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.592540 containerd[1469]: time="2025-02-13T20:20:10.592051863Z" level=error msg="Failed to destroy network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.593939 containerd[1469]: time="2025-02-13T20:20:10.593747273Z" level=error msg="encountered an error cleaning up failed sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.593939 containerd[1469]: time="2025-02-13T20:20:10.593874981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-xl9kp,Uid:960b37af-3575-4783-9059-3054ce49019f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.598531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963-shm.mount: Deactivated successfully. 
Feb 13 20:20:10.605128 containerd[1469]: time="2025-02-13T20:20:10.605054017Z" level=error msg="encountered an error cleaning up failed sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.605344 containerd[1469]: time="2025-02-13T20:20:10.605149926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-pf587,Uid:9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.606201 kubelet[2579]: E0213 20:20:10.606112 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.606735 kubelet[2579]: E0213 20:20:10.606422 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.606895 kubelet[2579]: E0213 20:20:10.606879 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955948f8-pf587" Feb 13 20:20:10.607079 kubelet[2579]: E0213 20:20:10.607050 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955948f8-pf587" Feb 13 20:20:10.607289 kubelet[2579]: E0213 20:20:10.607246 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59955948f8-pf587_calico-apiserver(9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59955948f8-pf587_calico-apiserver(9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955948f8-pf587" podUID="9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6" Feb 13 20:20:10.607747 kubelet[2579]: E0213 20:20:10.606841 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ws7h9" Feb 13 20:20:10.608229 kubelet[2579]: E0213 20:20:10.608193 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ws7h9" Feb 13 20:20:10.609454 kubelet[2579]: E0213 20:20:10.609401 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ws7h9_calico-system(22ae9f28-0bd2-4232-81cb-1eee6e72e721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ws7h9_calico-system(22ae9f28-0bd2-4232-81cb-1eee6e72e721)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:10.609856 kubelet[2579]: E0213 20:20:10.606451 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.611728 kubelet[2579]: E0213 20:20:10.610911 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" Feb 13 20:20:10.611728 kubelet[2579]: E0213 20:20:10.611423 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" Feb 13 20:20:10.611728 kubelet[2579]: E0213 20:20:10.611493 2579 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59955948f8-xl9kp_calico-apiserver(960b37af-3575-4783-9059-3054ce49019f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59955948f8-xl9kp_calico-apiserver(960b37af-3575-4783-9059-3054ce49019f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" podUID="960b37af-3575-4783-9059-3054ce49019f" Feb 13 20:20:10.624584 containerd[1469]: time="2025-02-13T20:20:10.624286860Z" level=error msg="Failed to destroy network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.631517 containerd[1469]: time="2025-02-13T20:20:10.628769983Z" level=error msg="encountered an error cleaning up failed sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.631517 containerd[1469]: time="2025-02-13T20:20:10.628870202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56dc9cc855-j5dzk,Uid:7c64b4ea-ed0c-443b-a61d-284300b0cf5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.631206 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d-shm.mount: Deactivated successfully. 
Feb 13 20:20:10.631845 kubelet[2579]: E0213 20:20:10.630664 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.631845 kubelet[2579]: E0213 20:20:10.630742 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" Feb 13 20:20:10.631845 kubelet[2579]: E0213 20:20:10.630771 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" Feb 13 20:20:10.632077 kubelet[2579]: E0213 20:20:10.630857 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56dc9cc855-j5dzk_calico-system(7c64b4ea-ed0c-443b-a61d-284300b0cf5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56dc9cc855-j5dzk_calico-system(7c64b4ea-ed0c-443b-a61d-284300b0cf5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" podUID="7c64b4ea-ed0c-443b-a61d-284300b0cf5b" Feb 13 20:20:10.644454 containerd[1469]: time="2025-02-13T20:20:10.644056529Z" level=error msg="Failed to destroy network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.645339 containerd[1469]: time="2025-02-13T20:20:10.645252470Z" level=error msg="encountered an error cleaning up failed sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.645504 containerd[1469]: time="2025-02-13T20:20:10.645357932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c6j9m,Uid:2e35ffa3-87a6-4203-a3ca-0abebfa17931,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.646365 kubelet[2579]: E0213 20:20:10.645878 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.646365 kubelet[2579]: E0213 20:20:10.645969 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-c6j9m" Feb 13 20:20:10.646365 kubelet[2579]: E0213 20:20:10.646004 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-c6j9m" Feb 13 20:20:10.648038 kubelet[2579]: E0213 20:20:10.646063 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-c6j9m_kube-system(2e35ffa3-87a6-4203-a3ca-0abebfa17931)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-c6j9m_kube-system(2e35ffa3-87a6-4203-a3ca-0abebfa17931)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-c6j9m" podUID="2e35ffa3-87a6-4203-a3ca-0abebfa17931" Feb 13 20:20:10.657699 containerd[1469]: time="2025-02-13T20:20:10.657632427Z" level=error msg="Failed to destroy network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.658074 containerd[1469]: time="2025-02-13T20:20:10.658033406Z" level=error msg="encountered an error cleaning up failed sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.658172 containerd[1469]: time="2025-02-13T20:20:10.658113466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hp4f,Uid:fae6f7a6-86bf-475d-928d-3783899e47e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.659525 kubelet[2579]: E0213 20:20:10.659465 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:10.659690 kubelet[2579]: E0213 20:20:10.659537 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6hp4f" Feb 13 20:20:10.659690 kubelet[2579]: E0213 20:20:10.659565 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6hp4f" Feb 13 20:20:10.659690 kubelet[2579]: E0213 20:20:10.659620 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6hp4f_kube-system(fae6f7a6-86bf-475d-928d-3783899e47e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6hp4f_kube-system(fae6f7a6-86bf-475d-928d-3783899e47e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6hp4f" podUID="fae6f7a6-86bf-475d-928d-3783899e47e1" Feb 13 20:20:11.213336 kubelet[2579]: I0213 20:20:11.213268 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Feb 13 20:20:11.215163 kubelet[2579]: I0213 20:20:11.214430 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Feb 13 20:20:11.222410 containerd[1469]: time="2025-02-13T20:20:11.221081839Z" level=info msg="StopPodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\"" Feb 13 20:20:11.222646 containerd[1469]: time="2025-02-13T20:20:11.222610022Z" level=info msg="StopPodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\"" Feb 13 20:20:11.223334 containerd[1469]: time="2025-02-13T20:20:11.223240337Z" level=info msg="Ensure that sandbox f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68 in task-service has been cleanup successfully" Feb 13 20:20:11.223953 containerd[1469]: 
time="2025-02-13T20:20:11.223278442Z" level=info msg="Ensure that sandbox 6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244 in task-service has been cleanup successfully" Feb 13 20:20:11.238013 kubelet[2579]: I0213 20:20:11.237975 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Feb 13 20:20:11.242184 containerd[1469]: time="2025-02-13T20:20:11.242142202Z" level=info msg="StopPodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\"" Feb 13 20:20:11.243651 kubelet[2579]: I0213 20:20:11.243530 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Feb 13 20:20:11.244732 containerd[1469]: time="2025-02-13T20:20:11.244088592Z" level=info msg="StopPodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\"" Feb 13 20:20:11.244732 containerd[1469]: time="2025-02-13T20:20:11.244282376Z" level=info msg="Ensure that sandbox 0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d in task-service has been cleanup successfully" Feb 13 20:20:11.245094 containerd[1469]: time="2025-02-13T20:20:11.245054674Z" level=info msg="Ensure that sandbox 44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963 in task-service has been cleanup successfully" Feb 13 20:20:11.252384 kubelet[2579]: I0213 20:20:11.252299 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:11.254153 containerd[1469]: time="2025-02-13T20:20:11.254098355Z" level=info msg="StopPodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\"" Feb 13 20:20:11.256009 containerd[1469]: time="2025-02-13T20:20:11.255944944Z" level=info msg="Ensure that sandbox 41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d in task-service has been cleanup successfully" Feb 13 20:20:11.262717 kubelet[2579]: I0213 20:20:11.262628 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Feb 13 20:20:11.264967 containerd[1469]: time="2025-02-13T20:20:11.264687282Z" level=info msg="StopPodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\"" Feb 13 20:20:11.266015 containerd[1469]: time="2025-02-13T20:20:11.265908381Z" level=info msg="Ensure that sandbox f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027 in task-service has been cleanup successfully" Feb 13 20:20:11.349005 containerd[1469]: time="2025-02-13T20:20:11.348886846Z" level=error msg="StopPodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" failed" error="failed to destroy network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:11.349642 kubelet[2579]: E0213 20:20:11.349359 2579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Feb 13 20:20:11.349642 kubelet[2579]: E0213 20:20:11.349418 2579 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"} Feb 13 20:20:11.349642 kubelet[2579]: E0213 20:20:11.349482 2579 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fae6f7a6-86bf-475d-928d-3783899e47e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:11.349642 kubelet[2579]: E0213 20:20:11.349511 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fae6f7a6-86bf-475d-928d-3783899e47e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6hp4f" podUID="fae6f7a6-86bf-475d-928d-3783899e47e1" Feb 13 20:20:11.362323 containerd[1469]: time="2025-02-13T20:20:11.361954951Z" level=error msg="StopPodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" failed" error="failed to destroy network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:11.364984 kubelet[2579]: E0213 20:20:11.364771 2579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Feb 13 20:20:11.364984 kubelet[2579]: E0213 20:20:11.364842 2579 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"} Feb 13 20:20:11.364984 kubelet[2579]: E0213 20:20:11.364896 2579 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"960b37af-3575-4783-9059-3054ce49019f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:11.364984 kubelet[2579]: E0213 20:20:11.364930 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"960b37af-3575-4783-9059-3054ce49019f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" podUID="960b37af-3575-4783-9059-3054ce49019f" Feb 13 20:20:11.375327 containerd[1469]: time="2025-02-13T20:20:11.374642847Z" level=error msg="StopPodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" failed" error="failed to destroy network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:11.375843 kubelet[2579]: E0213 20:20:11.375653 2579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Feb 13 20:20:11.375843 kubelet[2579]: E0213 20:20:11.375708 2579 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"} Feb 13 20:20:11.375843 kubelet[2579]: E0213 20:20:11.375784 2579 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e35ffa3-87a6-4203-a3ca-0abebfa17931\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:11.375843 kubelet[2579]: E0213 20:20:11.375811 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e35ffa3-87a6-4203-a3ca-0abebfa17931\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-c6j9m" podUID="2e35ffa3-87a6-4203-a3ca-0abebfa17931" Feb 13 20:20:11.401134 containerd[1469]: time="2025-02-13T20:20:11.401065010Z" level=error msg="StopPodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" failed" error="failed to destroy network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:11.401626 kubelet[2579]: E0213 20:20:11.401570 2579 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Feb 13 20:20:11.401769 kubelet[2579]: E0213 20:20:11.401633 2579 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"} Feb 13 20:20:11.401769 kubelet[2579]: E0213 20:20:11.401675 2579 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:11.401769 kubelet[2579]: E0213 20:20:11.401699 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22ae9f28-0bd2-4232-81cb-1eee6e72e721\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ws7h9" podUID="22ae9f28-0bd2-4232-81cb-1eee6e72e721" Feb 13 20:20:11.402402 containerd[1469]: time="2025-02-13T20:20:11.402283844Z" level=error msg="StopPodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" failed" error="failed to destroy network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:11.403259 kubelet[2579]: E0213 20:20:11.402971 2579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Feb 13 20:20:11.403259 kubelet[2579]: E0213 20:20:11.403160 2579 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"} Feb 13 20:20:11.403259 kubelet[2579]: E0213 20:20:11.403197 2579 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:11.403259 kubelet[2579]: E0213 20:20:11.403225 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955948f8-pf587" podUID="9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6" Feb 13 20:20:11.404032 containerd[1469]: time="2025-02-13T20:20:11.403994699Z" level=error msg="StopPodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" failed" error="failed to destroy network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:20:11.404598 kubelet[2579]: E0213 20:20:11.404271 2579 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:11.404598 kubelet[2579]: E0213 20:20:11.404500 2579 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d"} Feb 13 20:20:11.404598 kubelet[2579]: E0213 20:20:11.404544 2579 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:20:11.404598 kubelet[2579]: E0213 20:20:11.404567 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" podUID="7c64b4ea-ed0c-443b-a61d-284300b0cf5b" Feb 13 20:20:11.472193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d-shm.mount: Deactivated successfully. 
Feb 13 20:20:11.472851 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244-shm.mount: Deactivated successfully. Feb 13 20:20:15.007409 kubelet[2579]: I0213 20:20:15.006594 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:20:15.010356 kubelet[2579]: E0213 20:20:15.009154 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:15.283527 kubelet[2579]: E0213 20:20:15.283410 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:18.807389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718888793.mount: Deactivated successfully. Feb 13 20:20:19.112764 containerd[1469]: time="2025-02-13T20:20:19.077555455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:20:19.116069 containerd[1469]: time="2025-02-13T20:20:19.115991649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:19.156125 containerd[1469]: time="2025-02-13T20:20:19.156061402Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:19.165917 containerd[1469]: time="2025-02-13T20:20:19.165186170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:19.171543 containerd[1469]: time="2025-02-13T20:20:19.171487376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.934864763s" Feb 13 20:20:19.171755 containerd[1469]: time="2025-02-13T20:20:19.171731424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:20:19.211727 containerd[1469]: time="2025-02-13T20:20:19.211670564Z" level=info msg="CreateContainer within sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:20:19.304291 containerd[1469]: time="2025-02-13T20:20:19.304224577Z" level=info msg="CreateContainer within sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\"" Feb 13 20:20:19.305849 containerd[1469]: time="2025-02-13T20:20:19.305506465Z" level=info msg="StartContainer for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\"" Feb 13 20:20:19.427503 systemd[1]: Started cri-containerd-807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d.scope - libcontainer container 
807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d. Feb 13 20:20:19.518708 containerd[1469]: time="2025-02-13T20:20:19.517050444Z" level=info msg="StartContainer for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" returns successfully" Feb 13 20:20:19.619412 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:20:19.621850 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 20:20:20.300489 kubelet[2579]: E0213 20:20:20.299946 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:20.367332 kubelet[2579]: I0213 20:20:20.364541 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rj7zk" podStartSLOduration=2.5617551069999998 podStartE2EDuration="25.354385022s" podCreationTimestamp="2025-02-13 20:19:55 +0000 UTC" firstStartedPulling="2025-02-13 20:19:56.380093225 +0000 UTC m=+22.656686087" lastFinishedPulling="2025-02-13 20:20:19.172723121 +0000 UTC m=+45.449316002" observedRunningTime="2025-02-13 20:20:20.353659928 +0000 UTC m=+46.630252817" watchObservedRunningTime="2025-02-13 20:20:20.354385022 +0000 UTC m=+46.630977904" Feb 13 20:20:21.303190 kubelet[2579]: E0213 20:20:21.303077 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:21.765347 kernel: bpftool[3881]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:20:22.127826 systemd-networkd[1365]: vxlan.calico: Link UP Feb 13 20:20:22.127838 systemd-networkd[1365]: vxlan.calico: Gained carrier Feb 13 20:20:22.304919 kubelet[2579]: E0213 20:20:22.304863 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:22.949663 containerd[1469]: time="2025-02-13T20:20:22.938332667Z" level=info msg="StopPodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\"" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.115 [INFO][3987] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.117 [INFO][3987] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" iface="eth0" netns="/var/run/netns/cni-60a954f5-5547-fd05-e27b-99a86c58bdab" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.119 [INFO][3987] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" iface="eth0" netns="/var/run/netns/cni-60a954f5-5547-fd05-e27b-99a86c58bdab" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.125 [INFO][3987] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" iface="eth0" netns="/var/run/netns/cni-60a954f5-5547-fd05-e27b-99a86c58bdab" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.126 [INFO][3987] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.126 [INFO][3987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.377 [INFO][3994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.381 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.381 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.401 [WARNING][3994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.402 [INFO][3994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.409 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:23.439641 containerd[1469]: 2025-02-13 20:20:23.414 [INFO][3987] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Feb 13 20:20:23.441046 containerd[1469]: time="2025-02-13T20:20:23.439861929Z" level=info msg="TearDown network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" successfully" Feb 13 20:20:23.441046 containerd[1469]: time="2025-02-13T20:20:23.439933141Z" level=info msg="StopPodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" returns successfully" Feb 13 20:20:23.444647 systemd[1]: run-netns-cni\x2d60a954f5\x2d5547\x2dfd05\x2de27b\x2d99a86c58bdab.mount: Deactivated successfully. 
Feb 13 20:20:23.460202 containerd[1469]: time="2025-02-13T20:20:23.460139879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ws7h9,Uid:22ae9f28-0bd2-4232-81cb-1eee6e72e721,Namespace:calico-system,Attempt:1,}" Feb 13 20:20:23.756272 systemd-networkd[1365]: cali5c7a361437a: Link UP Feb 13 20:20:23.760782 systemd-networkd[1365]: cali5c7a361437a: Gained carrier Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.592 [INFO][4004] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0 csi-node-driver- calico-system 22ae9f28-0bd2-4232-81cb-1eee6e72e721 828 0 2025-02-13 20:19:55 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-6-670b8c47e7 csi-node-driver-ws7h9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c7a361437a [] []}} ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.595 [INFO][4004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.650 [INFO][4014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" HandleID="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.667 [INFO][4014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" HandleID="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-6-670b8c47e7", "pod":"csi-node-driver-ws7h9", "timestamp":"2025-02-13 20:20:23.650420528 +0000 UTC"}, Hostname:"ci-4081.3.1-6-670b8c47e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.668 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.668 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.668 [INFO][4014] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-6-670b8c47e7' Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.672 [INFO][4014] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.684 [INFO][4014] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.693 [INFO][4014] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.702 [INFO][4014] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.708 [INFO][4014] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.708 [INFO][4014] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.712 [INFO][4014] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.726 [INFO][4014] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.740 [INFO][4014] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.1/26] block=192.168.26.0/26 handle="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.740 [INFO][4014] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.1/26] handle="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.740 [INFO][4014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
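Every IPAM read-modify-write in these traces is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": the plugin serializes address claims per host so that sandboxes being set up concurrently (as the two calico-apiserver pods are below) cannot both take the same free slot in a block. A rough sketch of that discipline, with a process-local mutex standing in for the host-wide lock (illustration only, not Calico's code):

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM hands out offsets from one block, one claimant at a time,
    // mirroring the acquire/release pairs in the log.
    type hostIPAM struct {
        mu   sync.Mutex
        next int // next free offset in 192.168.26.0/26 (toy bookkeeping)
    }

    func (h *hostIPAM) claim() string {
        h.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        h.next++
        return fmt.Sprintf("192.168.26.%d/26", h.next)
    }

    func main() {
        h := &hostIPAM{}
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(h.claim()) // two distinct addresses, in some order
            }()
        }
        wg.Wait()
    }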
Feb 13 20:20:23.800842 containerd[1469]: 2025-02-13 20:20:23.741 [INFO][4014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.1/26] IPv6=[] ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" HandleID="k8s-pod-network.5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.803664 containerd[1469]: 2025-02-13 20:20:23.747 [INFO][4004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22ae9f28-0bd2-4232-81cb-1eee6e72e721", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"", Pod:"csi-node-driver-ws7h9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7a361437a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:23.803664 containerd[1469]: 2025-02-13 20:20:23.747 [INFO][4004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.1/32] ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.803664 containerd[1469]: 2025-02-13 20:20:23.747 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c7a361437a ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.803664 containerd[1469]: 2025-02-13 20:20:23.765 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.803664 containerd[1469]: 2025-02-13 20:20:23.767 [INFO][4004] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22ae9f28-0bd2-4232-81cb-1eee6e72e721", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c", Pod:"csi-node-driver-ws7h9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7a361437a", MAC:"5a:06:e1:dd:4b:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:23.803664 containerd[1469]: 2025-02-13 20:20:23.793 [INFO][4004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c" Namespace="calico-system" Pod="csi-node-driver-ws7h9" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0" Feb 13 20:20:23.868160 containerd[1469]: time="2025-02-13T20:20:23.867594293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:23.868160 containerd[1469]: time="2025-02-13T20:20:23.867756292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:23.868160 containerd[1469]: time="2025-02-13T20:20:23.867792865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:23.868160 containerd[1469]: time="2025-02-13T20:20:23.868032863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:23.912984 systemd[1]: Started cri-containerd-5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c.scope - libcontainer container 5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c. 
Feb 13 20:20:23.925970 containerd[1469]: time="2025-02-13T20:20:23.925124508Z" level=info msg="StopPodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\"" Feb 13 20:20:23.925970 containerd[1469]: time="2025-02-13T20:20:23.925579860Z" level=info msg="StopPodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\"" Feb 13 20:20:24.046013 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Feb 13 20:20:24.088615 containerd[1469]: time="2025-02-13T20:20:24.088561403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ws7h9,Uid:22ae9f28-0bd2-4232-81cb-1eee6e72e721,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c\"" Feb 13 20:20:24.142516 containerd[1469]: time="2025-02-13T20:20:24.141752106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.155 [INFO][4088] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.156 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" iface="eth0" netns="/var/run/netns/cni-fcb50c0f-07cf-d0ed-23a8-10e5c9e59c31" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.158 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" iface="eth0" netns="/var/run/netns/cni-fcb50c0f-07cf-d0ed-23a8-10e5c9e59c31" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.160 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" iface="eth0" netns="/var/run/netns/cni-fcb50c0f-07cf-d0ed-23a8-10e5c9e59c31" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.160 [INFO][4088] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.160 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.208 [INFO][4113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.208 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.208 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.234 [WARNING][4113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.235 [INFO][4113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.240 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:24.258405 containerd[1469]: 2025-02-13 20:20:24.248 [INFO][4088] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Feb 13 20:20:24.258405 containerd[1469]: time="2025-02-13T20:20:24.256683383Z" level=info msg="TearDown network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" successfully" Feb 13 20:20:24.258405 containerd[1469]: time="2025-02-13T20:20:24.256714006Z" level=info msg="StopPodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" returns successfully" Feb 13 20:20:24.258405 containerd[1469]: time="2025-02-13T20:20:24.257925653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-xl9kp,Uid:960b37af-3575-4783-9059-3054ce49019f,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.233 [INFO][4101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.233 [INFO][4101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" iface="eth0" netns="/var/run/netns/cni-64195366-985b-76bb-4956-4ca4c79b28f6" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.235 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" iface="eth0" netns="/var/run/netns/cni-64195366-985b-76bb-4956-4ca4c79b28f6" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.237 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" iface="eth0" netns="/var/run/netns/cni-64195366-985b-76bb-4956-4ca4c79b28f6" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.237 [INFO][4101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.237 [INFO][4101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.311 [INFO][4121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.313 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.314 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.334 [WARNING][4121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.334 [INFO][4121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.337 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:24.350320 containerd[1469]: 2025-02-13 20:20:24.343 [INFO][4101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Feb 13 20:20:24.352265 containerd[1469]: time="2025-02-13T20:20:24.351684471Z" level=info msg="TearDown network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" successfully" Feb 13 20:20:24.352265 containerd[1469]: time="2025-02-13T20:20:24.351727380Z" level=info msg="StopPodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" returns successfully" Feb 13 20:20:24.354251 containerd[1469]: time="2025-02-13T20:20:24.353681140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-pf587,Uid:9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:20:24.459859 systemd[1]: run-netns-cni\x2d64195366\x2d985b\x2d76bb\x2d4956\x2d4ca4c79b28f6.mount: Deactivated successfully. Feb 13 20:20:24.462561 systemd[1]: run-netns-cni\x2dfcb50c0f\x2d07cf\x2dd0ed\x2d23a8\x2d10e5c9e59c31.mount: Deactivated successfully. 
Feb 13 20:20:24.635957 systemd-networkd[1365]: cali4d2c656b53d: Link UP Feb 13 20:20:24.638756 systemd-networkd[1365]: cali4d2c656b53d: Gained carrier Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.429 [INFO][4127] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0 calico-apiserver-59955948f8- calico-apiserver 960b37af-3575-4783-9059-3054ce49019f 835 0 2025-02-13 20:19:56 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59955948f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-6-670b8c47e7 calico-apiserver-59955948f8-xl9kp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d2c656b53d [] []}} ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.429 [INFO][4127] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.509 [INFO][4149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" HandleID="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.528 [INFO][4149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" HandleID="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-6-670b8c47e7", "pod":"calico-apiserver-59955948f8-xl9kp", "timestamp":"2025-02-13 20:20:24.509175352 +0000 UTC"}, Hostname:"ci-4081.3.1-6-670b8c47e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.529 [INFO][4149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.529 [INFO][4149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.529 [INFO][4149] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-6-670b8c47e7' Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.537 [INFO][4149] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.552 [INFO][4149] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.569 [INFO][4149] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.573 [INFO][4149] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.579 [INFO][4149] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.580 [INFO][4149] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.584 [INFO][4149] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984 Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.599 [INFO][4149] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.614 [INFO][4149] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.2/26] block=192.168.26.0/26 handle="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.614 [INFO][4149] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.2/26] handle="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.614 [INFO][4149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:20:24.686584 containerd[1469]: 2025-02-13 20:20:24.614 [INFO][4149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.2/26] IPv6=[] ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" HandleID="k8s-pod-network.3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.692637 containerd[1469]: 2025-02-13 20:20:24.620 [INFO][4127] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"960b37af-3575-4783-9059-3054ce49019f", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"", Pod:"calico-apiserver-59955948f8-xl9kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d2c656b53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:24.692637 containerd[1469]: 2025-02-13 20:20:24.620 [INFO][4127] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.2/32] ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.692637 containerd[1469]: 2025-02-13 20:20:24.620 [INFO][4127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d2c656b53d ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.692637 containerd[1469]: 2025-02-13 20:20:24.636 [INFO][4127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.692637 containerd[1469]: 2025-02-13 20:20:24.640 [INFO][4127] cni-plugin/k8s.go 414: Added Mac,
interface name, and active container ID to endpoint ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"960b37af-3575-4783-9059-3054ce49019f", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984", Pod:"calico-apiserver-59955948f8-xl9kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d2c656b53d", MAC:"be:3f:19:81:9a:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:24.692637 containerd[1469]: 2025-02-13 20:20:24.672 [INFO][4127] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-xl9kp" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0" Feb 13 20:20:24.758747 systemd-networkd[1365]: cali395441415da: Link UP Feb 13 20:20:24.760463 systemd-networkd[1365]: cali395441415da: Gained carrier Feb 13 20:20:24.809205 containerd[1469]: time="2025-02-13T20:20:24.807387959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:24.809205 containerd[1469]: time="2025-02-13T20:20:24.807506054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:24.809205 containerd[1469]: time="2025-02-13T20:20:24.807521573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:24.809205 containerd[1469]: time="2025-02-13T20:20:24.807645776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.492 [INFO][4138] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0 calico-apiserver-59955948f8- calico-apiserver 9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6 836 0 2025-02-13 20:19:56 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59955948f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-6-670b8c47e7 calico-apiserver-59955948f8-pf587 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali395441415da [] []}} ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.492 [INFO][4138] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.588 [INFO][4160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" HandleID="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.608 [INFO][4160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" HandleID="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f3e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-6-670b8c47e7", "pod":"calico-apiserver-59955948f8-pf587", "timestamp":"2025-02-13 20:20:24.588595398 +0000 UTC"}, Hostname:"ci-4081.3.1-6-670b8c47e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.608 [INFO][4160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.616 [INFO][4160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.616 [INFO][4160] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-6-670b8c47e7' Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.623 [INFO][4160] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.639 [INFO][4160] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.660 [INFO][4160] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.665 [INFO][4160] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.682 [INFO][4160] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.682 [INFO][4160] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.691 [INFO][4160] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.715 [INFO][4160] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.743 [INFO][4160] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.3/26] block=192.168.26.0/26 handle="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.743 [INFO][4160] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.3/26] handle="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.744 [INFO][4160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:20:24.833867 containerd[1469]: 2025-02-13 20:20:24.744 [INFO][4160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.3/26] IPv6=[] ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" HandleID="k8s-pod-network.7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.834827 containerd[1469]: 2025-02-13 20:20:24.748 [INFO][4138] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"", Pod:"calico-apiserver-59955948f8-pf587", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395441415da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:24.834827 containerd[1469]: 2025-02-13 20:20:24.748 [INFO][4138] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.3/32] ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.834827 containerd[1469]: 2025-02-13 20:20:24.748 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali395441415da ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.834827 containerd[1469]: 2025-02-13 20:20:24.760 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.834827 containerd[1469]: 2025-02-13 20:20:24.770 [INFO][4138] cni-plugin/k8s.go 414: Added Mac,
interface name, and active container ID to endpoint ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead", Pod:"calico-apiserver-59955948f8-pf587", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395441415da", MAC:"96:57:3d:46:6c:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:24.834827 containerd[1469]: 2025-02-13 20:20:24.811 [INFO][4138] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead" Namespace="calico-apiserver" Pod="calico-apiserver-59955948f8-pf587" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0" Feb 13 20:20:24.875895 systemd[1]: Started cri-containerd-3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984.scope - libcontainer container 3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984. Feb 13 20:20:24.908417 containerd[1469]: time="2025-02-13T20:20:24.907847805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:24.908417 containerd[1469]: time="2025-02-13T20:20:24.907956284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:24.908417 containerd[1469]: time="2025-02-13T20:20:24.907978449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:24.908417 containerd[1469]: time="2025-02-13T20:20:24.908117546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:24.929196 containerd[1469]: time="2025-02-13T20:20:24.925547595Z" level=info msg="StopPodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\"" Feb 13 20:20:24.986705 systemd[1]: Started cri-containerd-7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead.scope - libcontainer container 7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead. Feb 13 20:20:25.035560 containerd[1469]: time="2025-02-13T20:20:25.034777741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-xl9kp,Uid:960b37af-3575-4783-9059-3054ce49019f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984\"" Feb 13 20:20:25.142064 containerd[1469]: time="2025-02-13T20:20:25.141999555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955948f8-pf587,Uid:9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead\"" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.105 [INFO][4267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.105 [INFO][4267] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" iface="eth0" netns="/var/run/netns/cni-51b88ebe-12b3-8004-7b4d-5be6c4659ee1" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.106 [INFO][4267] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" iface="eth0" netns="/var/run/netns/cni-51b88ebe-12b3-8004-7b4d-5be6c4659ee1" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.109 [INFO][4267] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" iface="eth0" netns="/var/run/netns/cni-51b88ebe-12b3-8004-7b4d-5be6c4659ee1" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.109 [INFO][4267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.109 [INFO][4267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.157 [INFO][4288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.157 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.157 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.174 [WARNING][4288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.174 [INFO][4288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.180 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:25.186264 containerd[1469]: 2025-02-13 20:20:25.183 [INFO][4267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Feb 13 20:20:25.187606 containerd[1469]: time="2025-02-13T20:20:25.186494807Z" level=info msg="TearDown network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" successfully" Feb 13 20:20:25.187606 containerd[1469]: time="2025-02-13T20:20:25.186533276Z" level=info msg="StopPodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" returns successfully" Feb 13 20:20:25.187875 kubelet[2579]: E0213 20:20:25.187821 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:25.189269 containerd[1469]: time="2025-02-13T20:20:25.189200949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hp4f,Uid:fae6f7a6-86bf-475d-928d-3783899e47e1,Namespace:kube-system,Attempt:1,}" Feb 13 20:20:25.453995 systemd[1]: run-netns-cni\x2d51b88ebe\x2d12b3\x2d8004\x2d7b4d\x2d5be6c4659ee1.mount: Deactivated successfully. 
Feb 13 20:20:25.476847 systemd-networkd[1365]: calib34c8447187: Link UP Feb 13 20:20:25.477284 systemd-networkd[1365]: calib34c8447187: Gained carrier Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.293 [INFO][4301] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0 coredns-7db6d8ff4d- kube-system fae6f7a6-86bf-475d-928d-3783899e47e1 849 0 2025-02-13 20:19:48 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-6-670b8c47e7 coredns-7db6d8ff4d-6hp4f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib34c8447187 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-" Feb 13 20:20:25.293 [INFO][4301] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.364 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" HandleID="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.381 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" HandleID="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fcd90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-6-670b8c47e7", "pod":"coredns-7db6d8ff4d-6hp4f", "timestamp":"2025-02-13 20:20:25.364575732 +0000 UTC"}, Hostname:"ci-4081.3.1-6-670b8c47e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.381 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.381 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.381 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-6-670b8c47e7' Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.385 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.398 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.410 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.420 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.426 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.426 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.429 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66 Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.451 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.463 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.4/26] block=192.168.26.0/26 handle="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.463 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.4/26] handle="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.463 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:20:25.511010 containerd[1469]: 2025-02-13 20:20:25.463 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.4/26] IPv6=[] ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" HandleID="k8s-pod-network.ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.511874 containerd[1469]: 2025-02-13 20:20:25.468 [INFO][4301] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fae6f7a6-86bf-475d-928d-3783899e47e1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"", Pod:"coredns-7db6d8ff4d-6hp4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib34c8447187", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:25.511874 containerd[1469]: 2025-02-13 20:20:25.468 [INFO][4301] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.4/32] ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.511874 containerd[1469]: 2025-02-13 20:20:25.469 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib34c8447187 ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.511874 containerd[1469]: 2025-02-13 20:20:25.479 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" 
WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.511874 containerd[1469]: 2025-02-13 20:20:25.480 [INFO][4301] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fae6f7a6-86bf-475d-928d-3783899e47e1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66", Pod:"coredns-7db6d8ff4d-6hp4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib34c8447187", MAC:"46:08:87:23:ff:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:25.511874 containerd[1469]: 2025-02-13 20:20:25.505 [INFO][4301] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6hp4f" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0" Feb 13 20:20:25.518265 systemd-networkd[1365]: cali5c7a361437a: Gained IPv6LL Feb 13 20:20:25.566109 containerd[1469]: time="2025-02-13T20:20:25.565640920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:25.566109 containerd[1469]: time="2025-02-13T20:20:25.565740527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:25.566109 containerd[1469]: time="2025-02-13T20:20:25.565767008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:25.566109 containerd[1469]: time="2025-02-13T20:20:25.565912523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:25.606683 systemd[1]: Started cri-containerd-ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66.scope - libcontainer container ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66. Feb 13 20:20:25.699296 containerd[1469]: time="2025-02-13T20:20:25.699208978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hp4f,Uid:fae6f7a6-86bf-475d-928d-3783899e47e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66\"" Feb 13 20:20:25.701012 kubelet[2579]: E0213 20:20:25.700977 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:25.706008 containerd[1469]: time="2025-02-13T20:20:25.705665468Z" level=info msg="CreateContainer within sandbox \"ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:20:25.748897 containerd[1469]: time="2025-02-13T20:20:25.748713970Z" level=info msg="CreateContainer within sandbox \"ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bebfc178e7a25727d25a3eb12e37f3e63d80848edda3888bb50122098cab1be\"" Feb 13 20:20:25.751933 containerd[1469]: time="2025-02-13T20:20:25.751846330Z" level=info msg="StartContainer for \"2bebfc178e7a25727d25a3eb12e37f3e63d80848edda3888bb50122098cab1be\"" Feb 13 20:20:25.796716 systemd[1]: Started cri-containerd-2bebfc178e7a25727d25a3eb12e37f3e63d80848edda3888bb50122098cab1be.scope - libcontainer container 2bebfc178e7a25727d25a3eb12e37f3e63d80848edda3888bb50122098cab1be. Feb 13 20:20:25.837592 systemd-networkd[1365]: cali4d2c656b53d: Gained IPv6LL Feb 13 20:20:25.845429 containerd[1469]: time="2025-02-13T20:20:25.845284460Z" level=info msg="StartContainer for \"2bebfc178e7a25727d25a3eb12e37f3e63d80848edda3888bb50122098cab1be\" returns successfully" Feb 13 20:20:25.928850 containerd[1469]: time="2025-02-13T20:20:25.927813161Z" level=info msg="StopPodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\"" Feb 13 20:20:25.932276 containerd[1469]: time="2025-02-13T20:20:25.931827363Z" level=info msg="StopPodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\"" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.138 [INFO][4434] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.142 [INFO][4434] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" iface="eth0" netns="/var/run/netns/cni-0e5201f5-dff4-c6c7-7e66-d7abb3f858c7" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.142 [INFO][4434] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" iface="eth0" netns="/var/run/netns/cni-0e5201f5-dff4-c6c7-7e66-d7abb3f858c7" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.144 [INFO][4434] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" iface="eth0" netns="/var/run/netns/cni-0e5201f5-dff4-c6c7-7e66-d7abb3f858c7" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.144 [INFO][4434] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.144 [INFO][4434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.263 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.264 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.264 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.300 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.301 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.318 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:26.335591 containerd[1469]: 2025-02-13 20:20:26.328 [INFO][4434] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:26.338421 containerd[1469]: time="2025-02-13T20:20:26.337686716Z" level=info msg="TearDown network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" successfully" Feb 13 20:20:26.338421 containerd[1469]: time="2025-02-13T20:20:26.337726819Z" level=info msg="StopPodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" returns successfully" Feb 13 20:20:26.340660 containerd[1469]: time="2025-02-13T20:20:26.339603066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56dc9cc855-j5dzk,Uid:7c64b4ea-ed0c-443b-a61d-284300b0cf5b,Namespace:calico-system,Attempt:1,}" Feb 13 20:20:26.354220 systemd-networkd[1365]: cali395441415da: Gained IPv6LL Feb 13 20:20:26.386278 kubelet[2579]: E0213 20:20:26.385983 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.117 [INFO][4432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.118 [INFO][4432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" iface="eth0" netns="/var/run/netns/cni-b8f41817-e87b-c9cf-0d37-beacd1e083da" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.118 [INFO][4432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" iface="eth0" netns="/var/run/netns/cni-b8f41817-e87b-c9cf-0d37-beacd1e083da" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.120 [INFO][4432] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" iface="eth0" netns="/var/run/netns/cni-b8f41817-e87b-c9cf-0d37-beacd1e083da" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.120 [INFO][4432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.120 [INFO][4432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.294 [INFO][4446] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.304 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.318 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.346 [WARNING][4446] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.346 [INFO][4446] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.357 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:26.391514 containerd[1469]: 2025-02-13 20:20:26.382 [INFO][4432] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Feb 13 20:20:26.393908 containerd[1469]: time="2025-02-13T20:20:26.392989385Z" level=info msg="TearDown network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" successfully" Feb 13 20:20:26.393908 containerd[1469]: time="2025-02-13T20:20:26.393035810Z" level=info msg="StopPodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" returns successfully" Feb 13 20:20:26.394090 kubelet[2579]: E0213 20:20:26.393599 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:26.396487 containerd[1469]: time="2025-02-13T20:20:26.394152421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c6j9m,Uid:2e35ffa3-87a6-4203-a3ca-0abebfa17931,Namespace:kube-system,Attempt:1,}" Feb 13 20:20:26.452505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515429877.mount: Deactivated successfully. Feb 13 20:20:26.452664 systemd[1]: run-netns-cni\x2d0e5201f5\x2ddff4\x2dc6c7\x2d7e66\x2dd7abb3f858c7.mount: Deactivated successfully. Feb 13 20:20:26.452730 systemd[1]: run-netns-cni\x2db8f41817\x2de87b\x2dc9cf\x2d0d37\x2dbeacd1e083da.mount: Deactivated successfully. 
Feb 13 20:20:26.798624 containerd[1469]: time="2025-02-13T20:20:26.798227106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:26.804425 containerd[1469]: time="2025-02-13T20:20:26.804352025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 20:20:26.805525 containerd[1469]: time="2025-02-13T20:20:26.805282226Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:26.809988 containerd[1469]: time="2025-02-13T20:20:26.809937133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:26.810909 containerd[1469]: time="2025-02-13T20:20:26.810805246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.668996198s"
Feb 13 20:20:26.810909 containerd[1469]: time="2025-02-13T20:20:26.810854049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 20:20:26.813690 containerd[1469]: time="2025-02-13T20:20:26.813653439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 20:20:26.828299 containerd[1469]: time="2025-02-13T20:20:26.828246272Z" level=info msg="CreateContainer within sandbox \"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 20:20:26.861688 systemd-networkd[1365]: cali34922017e51: Link UP
Feb 13 20:20:26.862054 systemd-networkd[1365]: cali34922017e51: Gained carrier
Feb 13 20:20:26.862216 systemd-networkd[1365]: calib34c8447187: Gained IPv6LL
Feb 13 20:20:26.889830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982939603.mount: Deactivated successfully.
Feb 13 20:20:26.897607 containerd[1469]: time="2025-02-13T20:20:26.896902953Z" level=info msg="CreateContainer within sandbox \"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"489d4d6d5043027a079e50822be7ba5c16300812f153500775ef24a53f0d2970\"" Feb 13 20:20:26.899380 containerd[1469]: time="2025-02-13T20:20:26.898761725Z" level=info msg="StartContainer for \"489d4d6d5043027a079e50822be7ba5c16300812f153500775ef24a53f0d2970\"" Feb 13 20:20:26.919656 kubelet[2579]: I0213 20:20:26.919371 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6hp4f" podStartSLOduration=38.919339462 podStartE2EDuration="38.919339462s" podCreationTimestamp="2025-02-13 20:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:20:26.439343949 +0000 UTC m=+52.715936833" watchObservedRunningTime="2025-02-13 20:20:26.919339462 +0000 UTC m=+53.195932345" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.655 [INFO][4476] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0 coredns-7db6d8ff4d- kube-system 2e35ffa3-87a6-4203-a3ca-0abebfa17931 861 0 2025-02-13 20:19:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-6-670b8c47e7 coredns-7db6d8ff4d-c6j9m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34922017e51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.656 [INFO][4476] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.721 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" HandleID="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.739 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" HandleID="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c4b60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-6-670b8c47e7", "pod":"coredns-7db6d8ff4d-c6j9m", "timestamp":"2025-02-13 20:20:26.721762697 +0000 UTC"}, Hostname:"ci-4081.3.1-6-670b8c47e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.742 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.743 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.744 [INFO][4496] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-6-670b8c47e7' Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.752 [INFO][4496] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.765 [INFO][4496] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.783 [INFO][4496] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.793 [INFO][4496] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.801 [INFO][4496] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.801 [INFO][4496] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.806 [INFO][4496] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624 Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.818 [INFO][4496] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.845 [INFO][4496] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.5/26] block=192.168.26.0/26 handle="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.845 [INFO][4496] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.5/26] handle="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.846 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:20:26.932709 containerd[1469]: 2025-02-13 20:20:26.846 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.5/26] IPv6=[] ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" HandleID="k8s-pod-network.db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.933719 containerd[1469]: 2025-02-13 20:20:26.852 [INFO][4476] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e35ffa3-87a6-4203-a3ca-0abebfa17931", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"", Pod:"coredns-7db6d8ff4d-c6j9m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34922017e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:26.933719 containerd[1469]: 2025-02-13 20:20:26.853 [INFO][4476] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.5/32] ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.933719 containerd[1469]: 2025-02-13 20:20:26.853 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34922017e51 ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.933719 containerd[1469]: 2025-02-13 20:20:26.861 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" 
WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:26.933719 containerd[1469]: 2025-02-13 20:20:26.864 [INFO][4476] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e35ffa3-87a6-4203-a3ca-0abebfa17931", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624", Pod:"coredns-7db6d8ff4d-c6j9m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34922017e51", MAC:"26:cb:8f:e8:34:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:26.933719 containerd[1469]: 2025-02-13 20:20:26.923 [INFO][4476] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c6j9m" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0" Feb 13 20:20:27.040936 systemd-networkd[1365]: cali893ed243fee: Link UP Feb 13 20:20:27.044385 containerd[1469]: time="2025-02-13T20:20:27.042506551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:27.044385 containerd[1469]: time="2025-02-13T20:20:27.042615292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:27.044385 containerd[1469]: time="2025-02-13T20:20:27.042661226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:27.044385 containerd[1469]: time="2025-02-13T20:20:27.042818004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:27.052369 systemd-networkd[1365]: cali893ed243fee: Gained carrier Feb 13 20:20:27.097594 systemd[1]: Started cri-containerd-489d4d6d5043027a079e50822be7ba5c16300812f153500775ef24a53f0d2970.scope - libcontainer container 489d4d6d5043027a079e50822be7ba5c16300812f153500775ef24a53f0d2970. Feb 13 20:20:27.111364 systemd[1]: Started cri-containerd-db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624.scope - libcontainer container db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624. Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.602 [INFO][4466] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0 calico-kube-controllers-56dc9cc855- calico-system 7c64b4ea-ed0c-443b-a61d-284300b0cf5b 862 0 2025-02-13 20:19:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56dc9cc855 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-6-670b8c47e7 calico-kube-controllers-56dc9cc855-j5dzk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali893ed243fee [] []}} ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.604 [INFO][4466] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.783 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.801 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-6-670b8c47e7", "pod":"calico-kube-controllers-56dc9cc855-j5dzk", "timestamp":"2025-02-13 20:20:26.783139297 +0000 UTC"}, Hostname:"ci-4081.3.1-6-670b8c47e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.801 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.846 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.846 [INFO][4492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-6-670b8c47e7' Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.853 [INFO][4492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.874 [INFO][4492] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.917 [INFO][4492] ipam/ipam.go 489: Trying affinity for 192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.932 [INFO][4492] ipam/ipam.go 155: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.941 [INFO][4492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.941 [INFO][4492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.954 [INFO][4492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:26.979 [INFO][4492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:27.014 [INFO][4492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.26.6/26] block=192.168.26.0/26 handle="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:27.014 [INFO][4492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.26.6/26] handle="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" host="ci-4081.3.1-6-670b8c47e7" Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:27.014 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:20:27.127349 containerd[1469]: 2025-02-13 20:20:27.014 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.6/26] IPv6=[] ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.128731 containerd[1469]: 2025-02-13 20:20:27.029 [INFO][4466] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0", GenerateName:"calico-kube-controllers-56dc9cc855-", Namespace:"calico-system", SelfLink:"", UID:"7c64b4ea-ed0c-443b-a61d-284300b0cf5b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56dc9cc855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"", Pod:"calico-kube-controllers-56dc9cc855-j5dzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali893ed243fee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:27.128731 containerd[1469]: 2025-02-13 20:20:27.032 [INFO][4466] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.26.6/32] ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.128731 containerd[1469]: 2025-02-13 20:20:27.033 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali893ed243fee ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.128731 containerd[1469]: 2025-02-13 20:20:27.062 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.128731 
containerd[1469]: 2025-02-13 20:20:27.078 [INFO][4466] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0", GenerateName:"calico-kube-controllers-56dc9cc855-", Namespace:"calico-system", SelfLink:"", UID:"7c64b4ea-ed0c-443b-a61d-284300b0cf5b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56dc9cc855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb", Pod:"calico-kube-controllers-56dc9cc855-j5dzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali893ed243fee", MAC:"9a:d6:fb:8e:e2:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:27.128731 containerd[1469]: 2025-02-13 20:20:27.117 [INFO][4466] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Namespace="calico-system" Pod="calico-kube-controllers-56dc9cc855-j5dzk" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:27.177159 containerd[1469]: time="2025-02-13T20:20:27.175674237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:20:27.178168 containerd[1469]: time="2025-02-13T20:20:27.176880896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:20:27.178168 containerd[1469]: time="2025-02-13T20:20:27.176902819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:27.178168 containerd[1469]: time="2025-02-13T20:20:27.177139178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:20:27.219061 systemd[1]: Started cri-containerd-e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb.scope - libcontainer container e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb. 
Feb 13 20:20:27.228001 containerd[1469]: time="2025-02-13T20:20:27.226743390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c6j9m,Uid:2e35ffa3-87a6-4203-a3ca-0abebfa17931,Namespace:kube-system,Attempt:1,} returns sandbox id \"db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624\""
Feb 13 20:20:27.231020 kubelet[2579]: E0213 20:20:27.230122 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:27.235748 containerd[1469]: time="2025-02-13T20:20:27.235022078Z" level=info msg="CreateContainer within sandbox \"db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:20:27.290938 containerd[1469]: time="2025-02-13T20:20:27.290665335Z" level=info msg="CreateContainer within sandbox \"db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecd208c5520b79b31283c07f1b82ee1c0d5b65344f5761c03e41a34dab956ebe\""
Feb 13 20:20:27.293696 containerd[1469]: time="2025-02-13T20:20:27.292566052Z" level=info msg="StartContainer for \"ecd208c5520b79b31283c07f1b82ee1c0d5b65344f5761c03e41a34dab956ebe\""
Feb 13 20:20:27.437862 kubelet[2579]: E0213 20:20:27.403712 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:27.438591 systemd[1]: Started cri-containerd-ecd208c5520b79b31283c07f1b82ee1c0d5b65344f5761c03e41a34dab956ebe.scope - libcontainer container ecd208c5520b79b31283c07f1b82ee1c0d5b65344f5761c03e41a34dab956ebe.
Feb 13 20:20:27.457984 containerd[1469]: time="2025-02-13T20:20:27.457926472Z" level=info msg="StartContainer for \"489d4d6d5043027a079e50822be7ba5c16300812f153500775ef24a53f0d2970\" returns successfully"
Feb 13 20:20:27.523687 containerd[1469]: time="2025-02-13T20:20:27.523623182Z" level=info msg="StartContainer for \"ecd208c5520b79b31283c07f1b82ee1c0d5b65344f5761c03e41a34dab956ebe\" returns successfully"
Feb 13 20:20:27.670706 containerd[1469]: time="2025-02-13T20:20:27.670600974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56dc9cc855-j5dzk,Uid:7c64b4ea-ed0c-443b-a61d-284300b0cf5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\""
Feb 13 20:20:28.408325 kubelet[2579]: E0213 20:20:28.408232 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:28.417473 kubelet[2579]: E0213 20:20:28.417244 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:28.449611 kubelet[2579]: I0213 20:20:28.448731 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c6j9m" podStartSLOduration=40.448705528 podStartE2EDuration="40.448705528s" podCreationTimestamp="2025-02-13 20:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:20:28.448526031 +0000 UTC m=+54.725118921" watchObservedRunningTime="2025-02-13 20:20:28.448705528 +0000 UTC m=+54.725298413"
Feb 13 20:20:28.462572 systemd-networkd[1365]: cali34922017e51: Gained IPv6LL
Feb 13 20:20:29.058591 systemd-networkd[1365]: cali893ed243fee: Gained IPv6LL
Feb 13 20:20:29.418048 kubelet[2579]: E0213 20:20:29.417907 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:30.333264 containerd[1469]: time="2025-02-13T20:20:30.332143378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:30.334605 containerd[1469]: time="2025-02-13T20:20:30.334523710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Feb 13 20:20:30.336358 containerd[1469]: time="2025-02-13T20:20:30.336252219Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:30.340706 containerd[1469]: time="2025-02-13T20:20:30.340545676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:30.342135 containerd[1469]: time="2025-02-13T20:20:30.342057380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.528360733s"
Feb 13 20:20:30.342135 containerd[1469]: time="2025-02-13T20:20:30.342123330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 20:20:30.348028 containerd[1469]: time="2025-02-13T20:20:30.345983313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 20:20:30.375609 containerd[1469]: time="2025-02-13T20:20:30.375549528Z" level=info msg="CreateContainer within sandbox \"3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 20:20:30.411076 containerd[1469]: time="2025-02-13T20:20:30.410990081Z" level=info msg="CreateContainer within sandbox \"3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee3a898238c03529e323b618335b7a7605f447e4db94efdc007b9eede0741a72\""
Feb 13 20:20:30.413661 containerd[1469]: time="2025-02-13T20:20:30.413213273Z" level=info msg="StartContainer for \"ee3a898238c03529e323b618335b7a7605f447e4db94efdc007b9eede0741a72\""
Feb 13 20:20:30.495992 kubelet[2579]: E0213 20:20:30.495841 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:30.541722 systemd[1]: Started cri-containerd-ee3a898238c03529e323b618335b7a7605f447e4db94efdc007b9eede0741a72.scope - libcontainer container ee3a898238c03529e323b618335b7a7605f447e4db94efdc007b9eede0741a72.
Feb 13 20:20:30.626670 containerd[1469]: time="2025-02-13T20:20:30.626142822Z" level=info msg="StartContainer for \"ee3a898238c03529e323b618335b7a7605f447e4db94efdc007b9eede0741a72\" returns successfully"
Feb 13 20:20:30.787412 containerd[1469]: time="2025-02-13T20:20:30.787251045Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:30.789826 containerd[1469]: time="2025-02-13T20:20:30.789037097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 20:20:30.791731 containerd[1469]: time="2025-02-13T20:20:30.791679375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 445.656002ms"
Feb 13 20:20:30.791929 containerd[1469]: time="2025-02-13T20:20:30.791911060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 20:20:30.793814 containerd[1469]: time="2025-02-13T20:20:30.793777141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 20:20:30.798270 containerd[1469]: time="2025-02-13T20:20:30.798219701Z" level=info msg="CreateContainer within sandbox \"7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 20:20:30.822003 containerd[1469]: time="2025-02-13T20:20:30.821851719Z" level=info msg="CreateContainer within sandbox \"7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"88b0f74965b705d012f5cfafb6e15482ed90248783319c78bce070cf8bfdf6fe\""
Feb 13 20:20:30.823013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364833299.mount: Deactivated successfully.
Feb 13 20:20:30.827273 containerd[1469]: time="2025-02-13T20:20:30.824936710Z" level=info msg="StartContainer for \"88b0f74965b705d012f5cfafb6e15482ed90248783319c78bce070cf8bfdf6fe\""
Feb 13 20:20:30.869560 systemd[1]: Started cri-containerd-88b0f74965b705d012f5cfafb6e15482ed90248783319c78bce070cf8bfdf6fe.scope - libcontainer container 88b0f74965b705d012f5cfafb6e15482ed90248783319c78bce070cf8bfdf6fe.
Feb 13 20:20:30.939330 containerd[1469]: time="2025-02-13T20:20:30.938995164Z" level=info msg="StartContainer for \"88b0f74965b705d012f5cfafb6e15482ed90248783319c78bce070cf8bfdf6fe\" returns successfully"
Feb 13 20:20:31.482351 kubelet[2579]: I0213 20:20:31.482198 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59955948f8-xl9kp" podStartSLOduration=30.177295418 podStartE2EDuration="35.482168532s" podCreationTimestamp="2025-02-13 20:19:56 +0000 UTC" firstStartedPulling="2025-02-13 20:20:25.040128097 +0000 UTC m=+51.316720961" lastFinishedPulling="2025-02-13 20:20:30.345001195 +0000 UTC m=+56.621594075" observedRunningTime="2025-02-13 20:20:31.456280386 +0000 UTC m=+57.732873270" watchObservedRunningTime="2025-02-13 20:20:31.482168532 +0000 UTC m=+57.758761417"
Feb 13 20:20:32.174283 kubelet[2579]: I0213 20:20:32.174169 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59955948f8-pf587" podStartSLOduration=30.53536501 podStartE2EDuration="36.174126747s" podCreationTimestamp="2025-02-13 20:19:56 +0000 UTC" firstStartedPulling="2025-02-13 20:20:25.154856381 +0000 UTC m=+51.431449257" lastFinishedPulling="2025-02-13 20:20:30.79361813 +0000 UTC m=+57.070210994" observedRunningTime="2025-02-13 20:20:31.485993859 +0000 UTC m=+57.762586746" watchObservedRunningTime="2025-02-13 20:20:32.174126747 +0000 UTC m=+58.450719628"
Feb 13 20:20:32.232916 kubelet[2579]: E0213 20:20:32.232785 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:20:32.445652 kubelet[2579]: I0213 20:20:32.444981 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:20:32.620736 containerd[1469]: time="2025-02-13T20:20:32.620608391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:32.622704 containerd[1469]: time="2025-02-13T20:20:32.622637539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 20:20:32.629457 containerd[1469]: time="2025-02-13T20:20:32.629361336Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:32.631228 containerd[1469]: time="2025-02-13T20:20:32.631030588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:20:32.631927 containerd[1469]: time="2025-02-13T20:20:32.631860997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.837706031s"
Feb 13 20:20:32.631927 containerd[1469]: time="2025-02-13T20:20:32.631900723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 20:20:32.634715 containerd[1469]: time="2025-02-13T20:20:32.634247074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Feb 13 20:20:32.637416 containerd[1469]: time="2025-02-13T20:20:32.637382576Z" level=info msg="CreateContainer within sandbox \"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 20:20:32.661495 containerd[1469]: time="2025-02-13T20:20:32.661424981Z" level=info msg="CreateContainer within sandbox \"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d10eaf5f095f427b549fd0dbeb0869d07ef4b339d863cddcff883f73f20d9fe2\""
Feb 13 20:20:32.663488 containerd[1469]: time="2025-02-13T20:20:32.663439555Z" level=info msg="StartContainer for \"d10eaf5f095f427b549fd0dbeb0869d07ef4b339d863cddcff883f73f20d9fe2\""
Feb 13 20:20:32.713498 systemd[1]: run-containerd-runc-k8s.io-d10eaf5f095f427b549fd0dbeb0869d07ef4b339d863cddcff883f73f20d9fe2-runc.6s5Y03.mount: Deactivated successfully.
Feb 13 20:20:32.724580 systemd[1]: Started cri-containerd-d10eaf5f095f427b549fd0dbeb0869d07ef4b339d863cddcff883f73f20d9fe2.scope - libcontainer container d10eaf5f095f427b549fd0dbeb0869d07ef4b339d863cddcff883f73f20d9fe2.
Feb 13 20:20:32.784653 containerd[1469]: time="2025-02-13T20:20:32.784388797Z" level=info msg="StartContainer for \"d10eaf5f095f427b549fd0dbeb0869d07ef4b339d863cddcff883f73f20d9fe2\" returns successfully"
Feb 13 20:20:33.179491 kubelet[2579]: I0213 20:20:33.179057 2579 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 20:20:33.180470 kubelet[2579]: I0213 20:20:33.180429 2579 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 20:20:33.484375 kubelet[2579]: I0213 20:20:33.484204 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ws7h9" podStartSLOduration=29.964153106 podStartE2EDuration="38.484183976s" podCreationTimestamp="2025-02-13 20:19:55 +0000 UTC" firstStartedPulling="2025-02-13 20:20:24.113608039 +0000 UTC m=+50.390200909" lastFinishedPulling="2025-02-13 20:20:32.633638893 +0000 UTC m=+58.910231779" observedRunningTime="2025-02-13 20:20:33.481367174 +0000 UTC m=+59.757960057" watchObservedRunningTime="2025-02-13 20:20:33.484183976 +0000 UTC m=+59.760776859"
Feb 13 20:20:33.934389 containerd[1469]: time="2025-02-13T20:20:33.933853332Z" level=info msg="StopPodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\""
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:33.997 [WARNING][4869] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fae6f7a6-86bf-475d-928d-3783899e47e1", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66", Pod:"coredns-7db6d8ff4d-6hp4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib34c8447187", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:33.999 [INFO][4869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:33.999 [INFO][4869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" iface="eth0" netns=""
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.000 [INFO][4869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.000 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.053 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0"
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.053 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.053 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.062 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0"
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.062 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0"
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.065 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.070814 containerd[1469]: 2025-02-13 20:20:34.067 [INFO][4869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.072133 containerd[1469]: time="2025-02-13T20:20:34.070853197Z" level=info msg="TearDown network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" successfully"
Feb 13 20:20:34.072133 containerd[1469]: time="2025-02-13T20:20:34.070878291Z" level=info msg="StopPodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" returns successfully"
Feb 13 20:20:34.072736 containerd[1469]: time="2025-02-13T20:20:34.072698859Z" level=info msg="RemovePodSandbox for \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\""
Feb 13 20:20:34.072834 containerd[1469]: time="2025-02-13T20:20:34.072738160Z" level=info msg="Forcibly stopping sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\""
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.129 [WARNING][4894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fae6f7a6-86bf-475d-928d-3783899e47e1", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"ee25b271d78d83a7aab5c5f233d06312ee24d713114a70598be2c8274e426e66", Pod:"coredns-7db6d8ff4d-6hp4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib34c8447187", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.130 [INFO][4894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.130 [INFO][4894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" iface="eth0" netns=""
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.130 [INFO][4894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.131 [INFO][4894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.167 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0"
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.168 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.168 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.186 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0"
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.186 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" HandleID="k8s-pod-network.0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--6hp4f-eth0"
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.190 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.195047 containerd[1469]: 2025-02-13 20:20:34.192 [INFO][4894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d"
Feb 13 20:20:34.195047 containerd[1469]: time="2025-02-13T20:20:34.195025784Z" level=info msg="TearDown network for sandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" successfully"
Feb 13 20:20:34.200611 containerd[1469]: time="2025-02-13T20:20:34.200540878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:20:34.200760 containerd[1469]: time="2025-02-13T20:20:34.200646634Z" level=info msg="RemovePodSandbox \"0d30e81f7ce326939fd45db43e607474465c184dca64bf9c4018c10026c9085d\" returns successfully"
Feb 13 20:20:34.201457 containerd[1469]: time="2025-02-13T20:20:34.201422900Z" level=info msg="StopPodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\""
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.255 [WARNING][4918] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e35ffa3-87a6-4203-a3ca-0abebfa17931", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624", Pod:"coredns-7db6d8ff4d-c6j9m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34922017e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.255 [INFO][4918] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.255 [INFO][4918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" iface="eth0" netns=""
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.255 [INFO][4918] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.255 [INFO][4918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.287 [INFO][4924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0"
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.287 [INFO][4924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.287 [INFO][4924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.294 [WARNING][4924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0"
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.294 [INFO][4924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0"
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.297 [INFO][4924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.302239 containerd[1469]: 2025-02-13 20:20:34.299 [INFO][4918] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.305344 containerd[1469]: time="2025-02-13T20:20:34.302459379Z" level=info msg="TearDown network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" successfully"
Feb 13 20:20:34.305344 containerd[1469]: time="2025-02-13T20:20:34.302515320Z" level=info msg="StopPodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" returns successfully"
Feb 13 20:20:34.305344 containerd[1469]: time="2025-02-13T20:20:34.303740573Z" level=info msg="RemovePodSandbox for \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\""
Feb 13 20:20:34.305344 containerd[1469]: time="2025-02-13T20:20:34.303787768Z" level=info msg="Forcibly stopping sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\""
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.373 [WARNING][4942] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e35ffa3-87a6-4203-a3ca-0abebfa17931", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"db95a6728ec16d474479340584f6ac5d5ef6356bc3565e4e332253c140fa3624", Pod:"coredns-7db6d8ff4d-c6j9m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34922017e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.374 [INFO][4942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.374 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" iface="eth0" netns=""
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.374 [INFO][4942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.374 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.430 [INFO][4948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0"
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.431 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.431 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.447 [WARNING][4948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0"
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.452 [INFO][4948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" HandleID="k8s-pod-network.6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244" Workload="ci--4081.3.1--6--670b8c47e7-k8s-coredns--7db6d8ff4d--c6j9m-eth0"
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.459 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.470949 containerd[1469]: 2025-02-13 20:20:34.464 [INFO][4942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244"
Feb 13 20:20:34.470949 containerd[1469]: time="2025-02-13T20:20:34.470824858Z" level=info msg="TearDown network for sandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" successfully"
Feb 13 20:20:34.478525 containerd[1469]: time="2025-02-13T20:20:34.478483420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:20:34.478988 containerd[1469]: time="2025-02-13T20:20:34.478791931Z" level=info msg="RemovePodSandbox \"6f36f09e10c6d3c8f2c29305221b0f7900b36ca3ae434464485bc5b6bcf04244\" returns successfully"
Feb 13 20:20:34.479687 containerd[1469]: time="2025-02-13T20:20:34.479665783Z" level=info msg="StopPodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\""
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.572 [WARNING][4969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22ae9f28-0bd2-4232-81cb-1eee6e72e721", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c", Pod:"csi-node-driver-ws7h9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7a361437a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.573 [INFO][4969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.573 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" iface="eth0" netns=""
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.573 [INFO][4969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.573 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.612 [INFO][4975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0"
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.612 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.613 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.621 [WARNING][4975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0"
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.622 [INFO][4975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0"
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.626 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.633543 containerd[1469]: 2025-02-13 20:20:34.630 [INFO][4969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.635735 containerd[1469]: time="2025-02-13T20:20:34.634909296Z" level=info msg="TearDown network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" successfully"
Feb 13 20:20:34.635735 containerd[1469]: time="2025-02-13T20:20:34.634942339Z" level=info msg="StopPodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" returns successfully"
Feb 13 20:20:34.637893 containerd[1469]: time="2025-02-13T20:20:34.637263678Z" level=info msg="RemovePodSandbox for \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\""
Feb 13 20:20:34.637893 containerd[1469]: time="2025-02-13T20:20:34.637328201Z" level=info msg="Forcibly stopping sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\""
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.703 [WARNING][4994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22ae9f28-0bd2-4232-81cb-1eee6e72e721", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"5f6b303fda4e76bd603e43896abae6135d8f5ce1092f03bf89ed3bf2af3d7d9c", Pod:"csi-node-driver-ws7h9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7a361437a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.703 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.703 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" iface="eth0" netns=""
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.703 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.703 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.737 [INFO][5001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0"
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.737 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.737 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.748 [WARNING][5001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0"
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.749 [INFO][5001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" HandleID="k8s-pod-network.f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68" Workload="ci--4081.3.1--6--670b8c47e7-k8s-csi--node--driver--ws7h9-eth0"
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.751 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.759409 containerd[1469]: 2025-02-13 20:20:34.755 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68"
Feb 13 20:20:34.759409 containerd[1469]: time="2025-02-13T20:20:34.759378990Z" level=info msg="TearDown network for sandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" successfully"
Feb 13 20:20:34.764798 containerd[1469]: time="2025-02-13T20:20:34.764644659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:20:34.764798 containerd[1469]: time="2025-02-13T20:20:34.764738298Z" level=info msg="RemovePodSandbox \"f8e153798e06d26ea387ba15979c98311c74f4f55771920834b15db20ac22b68\" returns successfully"
Feb 13 20:20:34.766107 containerd[1469]: time="2025-02-13T20:20:34.766026316Z" level=info msg="StopPodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\""
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.870 [WARNING][5020] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"960b37af-3575-4783-9059-3054ce49019f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984", Pod:"calico-apiserver-59955948f8-xl9kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d2c656b53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.871 [INFO][5020] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.871 [INFO][5020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" iface="eth0" netns=""
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.871 [INFO][5020] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.871 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.924 [INFO][5026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0"
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.925 [INFO][5026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.925 [INFO][5026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.945 [WARNING][5026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0"
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.945 [INFO][5026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0"
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.951 [INFO][5026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:34.959198 containerd[1469]: 2025-02-13 20:20:34.956 [INFO][5020] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:34.959198 containerd[1469]: time="2025-02-13T20:20:34.958971411Z" level=info msg="TearDown network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" successfully"
Feb 13 20:20:34.959198 containerd[1469]: time="2025-02-13T20:20:34.959009360Z" level=info msg="StopPodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" returns successfully"
Feb 13 20:20:34.961812 containerd[1469]: time="2025-02-13T20:20:34.960280664Z" level=info msg="RemovePodSandbox for \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\""
Feb 13 20:20:34.961812 containerd[1469]: time="2025-02-13T20:20:34.960597359Z" level=info msg="Forcibly stopping sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\""
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.045 [WARNING][5044] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"960b37af-3575-4783-9059-3054ce49019f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"3f3dfbbf46c50fc79506a204eefcb16966c4640c35581050c38839b50de1d984", Pod:"calico-apiserver-59955948f8-xl9kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d2c656b53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.046 [INFO][5044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.046 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" iface="eth0" netns=""
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.046 [INFO][5044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.046 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.088 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0"
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.088 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.088 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.099 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0"
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.099 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" HandleID="k8s-pod-network.44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--xl9kp-eth0"
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.104 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:35.111564 containerd[1469]: 2025-02-13 20:20:35.108 [INFO][5044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963"
Feb 13 20:20:35.111564 containerd[1469]: time="2025-02-13T20:20:35.111422859Z" level=info msg="TearDown network for sandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" successfully"
Feb 13 20:20:35.116478 containerd[1469]: time="2025-02-13T20:20:35.116428001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:20:35.117227 containerd[1469]: time="2025-02-13T20:20:35.116531491Z" level=info msg="RemovePodSandbox \"44be8f03873e4f00fb63877d4b994bf4ea9528923e325fc7b2a7693151ad1963\" returns successfully"
Feb 13 20:20:35.118161 containerd[1469]: time="2025-02-13T20:20:35.118121073Z" level=info msg="StopPodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\""
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.182 [WARNING][5069] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead", Pod:"calico-apiserver-59955948f8-pf587", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395441415da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.182 [INFO][5069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.182 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" iface="eth0" netns=""
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.182 [INFO][5069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.182 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.237 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0"
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.238 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.238 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.249 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0"
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.249 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0"
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.251 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:35.259946 containerd[1469]: 2025-02-13 20:20:35.255 [INFO][5069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.259946 containerd[1469]: time="2025-02-13T20:20:35.258656820Z" level=info msg="TearDown network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" successfully"
Feb 13 20:20:35.259946 containerd[1469]: time="2025-02-13T20:20:35.258716335Z" level=info msg="StopPodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" returns successfully"
Feb 13 20:20:35.262369 containerd[1469]: time="2025-02-13T20:20:35.261848768Z" level=info msg="RemovePodSandbox for \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\""
Feb 13 20:20:35.262369 containerd[1469]: time="2025-02-13T20:20:35.261906411Z" level=info msg="Forcibly stopping sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\""
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.353 [WARNING][5094] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0", GenerateName:"calico-apiserver-59955948f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e72eb3b-0738-4c0b-a16d-dc0fdc609fa6", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955948f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"7585b1e8073233334733d10aabc46a1cc1f2777d9885dbb04a811eac630fdead", Pod:"calico-apiserver-59955948f8-pf587", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395441415da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.355 [INFO][5094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.355 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" iface="eth0" netns=""
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.355 [INFO][5094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.355 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.403 [INFO][5101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0"
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.403 [INFO][5101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.403 [INFO][5101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.412 [WARNING][5101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0"
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.412 [INFO][5101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" HandleID="k8s-pod-network.f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--apiserver--59955948f8--pf587-eth0"
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.417 [INFO][5101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:20:35.424189 containerd[1469]: 2025-02-13 20:20:35.421 [INFO][5094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027"
Feb 13 20:20:35.424189 containerd[1469]: time="2025-02-13T20:20:35.424143086Z" level=info msg="TearDown network for sandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" successfully"
Feb 13 20:20:35.455629 containerd[1469]: time="2025-02-13T20:20:35.455568069Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:20:35.457261 containerd[1469]: time="2025-02-13T20:20:35.455688180Z" level=info msg="RemovePodSandbox \"f2e924e74f76b610ee585e83cf4bc1ed3ddddeb290fe3db5e9bf94bfa462c027\" returns successfully"
Feb 13 20:20:35.458065 containerd[1469]: time="2025-02-13T20:20:35.458033690Z" level=info msg="StopPodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\""
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.604 [WARNING][5124] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0", GenerateName:"calico-kube-controllers-56dc9cc855-", Namespace:"calico-system", SelfLink:"", UID:"7c64b4ea-ed0c-443b-a61d-284300b0cf5b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56dc9cc855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb", Pod:"calico-kube-controllers-56dc9cc855-j5dzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali893ed243fee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.604 [INFO][5124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d"
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.604 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" iface="eth0" netns=""
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.604 [INFO][5124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d"
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.605 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d"
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.660 [INFO][5130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0"
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.660 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.661 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.672 [WARNING][5130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.672 [INFO][5130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.677 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:35.686478 containerd[1469]: 2025-02-13 20:20:35.683 [INFO][5124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:35.686478 containerd[1469]: time="2025-02-13T20:20:35.686220080Z" level=info msg="TearDown network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" successfully" Feb 13 20:20:35.686478 containerd[1469]: time="2025-02-13T20:20:35.686251201Z" level=info msg="StopPodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" returns successfully" Feb 13 20:20:35.688629 containerd[1469]: time="2025-02-13T20:20:35.688598791Z" level=info msg="RemovePodSandbox for \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\"" Feb 13 20:20:35.688747 containerd[1469]: time="2025-02-13T20:20:35.688723548Z" level=info msg="Forcibly stopping sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\"" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.776 [WARNING][5148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0", GenerateName:"calico-kube-controllers-56dc9cc855-", Namespace:"calico-system", SelfLink:"", UID:"7c64b4ea-ed0c-443b-a61d-284300b0cf5b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 19, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56dc9cc855", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-6-670b8c47e7", ContainerID:"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb", Pod:"calico-kube-controllers-56dc9cc855-j5dzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali893ed243fee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.777 [INFO][5148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.777 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" iface="eth0" netns="" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.777 [INFO][5148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.777 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.836 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.837 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.837 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.889 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.890 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" HandleID="k8s-pod-network.41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.898 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:35.910336 containerd[1469]: 2025-02-13 20:20:35.905 [INFO][5148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d" Feb 13 20:20:35.910336 containerd[1469]: time="2025-02-13T20:20:35.910193099Z" level=info msg="TearDown network for sandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" successfully" Feb 13 20:20:35.923884 containerd[1469]: time="2025-02-13T20:20:35.923280152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:20:35.923884 containerd[1469]: time="2025-02-13T20:20:35.923382186Z" level=info msg="RemovePodSandbox \"41cdcf642babebab96e99919b5570b36e548b0a39e4c557c345f37011ba63f8d\" returns successfully" Feb 13 20:20:36.241724 systemd[1]: Started sshd@9-165.232.153.54:22-147.75.109.163:51920.service - OpenSSH per-connection server daemon (147.75.109.163:51920). Feb 13 20:20:36.405404 sshd[5162]: Accepted publickey for core from 147.75.109.163 port 51920 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:36.409171 sshd[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:36.426895 systemd-logind[1441]: New session 8 of user core. Feb 13 20:20:36.432670 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 20:20:36.881446 containerd[1469]: time="2025-02-13T20:20:36.881375228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:36.885035 containerd[1469]: time="2025-02-13T20:20:36.884941003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:20:36.889117 containerd[1469]: time="2025-02-13T20:20:36.886908551Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:36.892068 containerd[1469]: time="2025-02-13T20:20:36.892021163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:20:36.893370 containerd[1469]: time="2025-02-13T20:20:36.893205220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.258920221s" Feb 13 20:20:36.893669 containerd[1469]: time="2025-02-13T20:20:36.893633622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:20:36.952167 containerd[1469]: time="2025-02-13T20:20:36.952120542Z" level=info msg="CreateContainer within sandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:20:36.972437 containerd[1469]: time="2025-02-13T20:20:36.972379272Z" level=info msg="CreateContainer within sandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\"" Feb 13 20:20:36.974990 containerd[1469]: time="2025-02-13T20:20:36.974952793Z" level=info msg="StartContainer for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\"" Feb 13 20:20:37.061925 systemd[1]: Started cri-containerd-645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c.scope - libcontainer container 645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c. Feb 13 20:20:37.176453 containerd[1469]: time="2025-02-13T20:20:37.175944035Z" level=info msg="StartContainer for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" returns successfully" Feb 13 20:20:37.474082 sshd[5162]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:37.480706 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:20:37.481079 systemd[1]: sshd@9-165.232.153.54:22-147.75.109.163:51920.service: Deactivated successfully. Feb 13 20:20:37.485895 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:20:37.488759 systemd-logind[1441]: Removed session 8. 
Feb 13 20:20:37.549415 kubelet[2579]: I0213 20:20:37.546978 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56dc9cc855-j5dzk" podStartSLOduration=33.324142544 podStartE2EDuration="42.546947196s" podCreationTimestamp="2025-02-13 20:19:55 +0000 UTC" firstStartedPulling="2025-02-13 20:20:27.673351718 +0000 UTC m=+53.949944596" lastFinishedPulling="2025-02-13 20:20:36.896156385 +0000 UTC m=+63.172749248" observedRunningTime="2025-02-13 20:20:37.546388145 +0000 UTC m=+63.822981034" watchObservedRunningTime="2025-02-13 20:20:37.546947196 +0000 UTC m=+63.823540081" Feb 13 20:20:39.985865 systemd[1]: run-containerd-runc-k8s.io-645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c-runc.kxO3Ah.mount: Deactivated successfully. Feb 13 20:20:42.499602 systemd[1]: Started sshd@10-165.232.153.54:22-147.75.109.163:45238.service - OpenSSH per-connection server daemon (147.75.109.163:45238). Feb 13 20:20:42.594117 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 45238 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:42.596096 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:42.601749 systemd-logind[1441]: New session 9 of user core. Feb 13 20:20:42.608552 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:20:42.787741 sshd[5261]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:42.793943 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:20:42.794104 systemd[1]: sshd@10-165.232.153.54:22-147.75.109.163:45238.service: Deactivated successfully. Feb 13 20:20:42.797130 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:20:42.798390 systemd-logind[1441]: Removed session 9. Feb 13 20:20:44.922338 kubelet[2579]: E0213 20:20:44.921934 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:47.809869 systemd[1]: Started sshd@11-165.232.153.54:22-147.75.109.163:45246.service - OpenSSH per-connection server daemon (147.75.109.163:45246). Feb 13 20:20:47.885934 sshd[5284]: Accepted publickey for core from 147.75.109.163 port 45246 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:47.888744 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:47.897291 systemd-logind[1441]: New session 10 of user core. Feb 13 20:20:47.904776 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:20:48.104664 sshd[5284]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:48.113112 systemd[1]: sshd@11-165.232.153.54:22-147.75.109.163:45246.service: Deactivated successfully. Feb 13 20:20:48.117153 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:20:48.122537 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:20:48.125586 systemd-logind[1441]: Removed session 10. Feb 13 20:20:49.173809 systemd[1]: Started sshd@12-165.232.153.54:22-218.92.0.167:21296.service - OpenSSH per-connection server daemon (218.92.0.167:21296). Feb 13 20:20:49.244783 systemd[1]: Started sshd@13-165.232.153.54:22-218.92.0.167:21570.service - OpenSSH per-connection server daemon (218.92.0.167:21570). 
Feb 13 20:20:50.360435 sshd[5305]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:20:50.535081 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:20:52.374553 sshd[5298]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:20:52.548158 sshd[5301]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:20:52.692112 sshd[5307]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:20:52.900708 sshd[5308]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:20:53.122765 systemd[1]: Started sshd@14-165.232.153.54:22-147.75.109.163:33952.service - OpenSSH per-connection server daemon (147.75.109.163:33952). Feb 13 20:20:53.175388 sshd[5310]: Accepted publickey for core from 147.75.109.163 port 33952 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:53.177260 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:53.184839 systemd-logind[1441]: New session 11 of user core. Feb 13 20:20:53.190604 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:20:53.370177 sshd[5310]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:53.378447 systemd[1]: sshd@14-165.232.153.54:22-147.75.109.163:33952.service: Deactivated successfully. Feb 13 20:20:53.382625 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:20:53.384384 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:20:53.385976 systemd-logind[1441]: Removed session 11. Feb 13 20:20:54.981408 sshd[5298]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:20:55.191410 sshd[5301]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:20:55.298292 sshd[5324]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:20:55.543282 sshd[5325]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:20:56.542068 containerd[1469]: time="2025-02-13T20:20:56.541753118Z" level=info msg="StopContainer for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" with timeout 300 (s)" Feb 13 20:20:56.547768 containerd[1469]: time="2025-02-13T20:20:56.547543775Z" level=info msg="Stop container \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" with signal terminated" Feb 13 20:20:56.754372 containerd[1469]: time="2025-02-13T20:20:56.754283256Z" level=info msg="StopContainer for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" with timeout 30 (s)" Feb 13 20:20:56.755645 containerd[1469]: time="2025-02-13T20:20:56.755477618Z" level=info msg="Stop container \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" with signal terminated" Feb 13 20:20:56.783283 systemd[1]: cri-containerd-645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c.scope: Deactivated successfully. Feb 13 20:20:56.856194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c-rootfs.mount: Deactivated successfully. 
Feb 13 20:20:56.983846 containerd[1469]: time="2025-02-13T20:20:56.950076133Z" level=info msg="shim disconnected" id=645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c namespace=k8s.io Feb 13 20:20:56.997893 containerd[1469]: time="2025-02-13T20:20:56.997617759Z" level=warning msg="cleaning up after shim disconnected" id=645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c namespace=k8s.io Feb 13 20:20:56.997893 containerd[1469]: time="2025-02-13T20:20:56.997670293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:20:57.070353 containerd[1469]: time="2025-02-13T20:20:57.069216838Z" level=info msg="StopContainer for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" returns successfully" Feb 13 20:20:57.070835 containerd[1469]: time="2025-02-13T20:20:57.070806394Z" level=info msg="StopPodSandbox for \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\"" Feb 13 20:20:57.071035 containerd[1469]: time="2025-02-13T20:20:57.071010394Z" level=info msg="Container to stop \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:20:57.075877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb-shm.mount: Deactivated successfully. Feb 13 20:20:57.087722 systemd[1]: cri-containerd-e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb.scope: Deactivated successfully. Feb 13 20:20:57.124480 containerd[1469]: time="2025-02-13T20:20:57.123536204Z" level=info msg="shim disconnected" id=e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb namespace=k8s.io Feb 13 20:20:57.124480 containerd[1469]: time="2025-02-13T20:20:57.123611742Z" level=warning msg="cleaning up after shim disconnected" id=e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb namespace=k8s.io Feb 13 20:20:57.124480 containerd[1469]: time="2025-02-13T20:20:57.123621536Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:20:57.129170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb-rootfs.mount: Deactivated successfully. Feb 13 20:20:57.299090 systemd-networkd[1365]: cali893ed243fee: Link DOWN Feb 13 20:20:57.299824 systemd-networkd[1365]: cali893ed243fee: Lost carrier Feb 13 20:20:57.338364 sshd[5298]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.295 [INFO][5412] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.295 [INFO][5412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" iface="eth0" netns="/var/run/netns/cni-c159551e-2d9e-2b04-557d-b1c5be4e91ab" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.296 [INFO][5412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" iface="eth0" netns="/var/run/netns/cni-c159551e-2d9e-2b04-557d-b1c5be4e91ab" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.307 [INFO][5412] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" after=11.527278ms iface="eth0" netns="/var/run/netns/cni-c159551e-2d9e-2b04-557d-b1c5be4e91ab" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.307 [INFO][5412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.308 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.374 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.374 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.374 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.455 [INFO][5418] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.455 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.458 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:20:57.471360 containerd[1469]: 2025-02-13 20:20:57.463 [INFO][5412] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:20:57.471360 containerd[1469]: time="2025-02-13T20:20:57.468733703Z" level=info msg="TearDown network for sandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" successfully" Feb 13 20:20:57.471360 containerd[1469]: time="2025-02-13T20:20:57.468801105Z" level=info msg="StopPodSandbox for \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" returns successfully" Feb 13 20:20:57.474112 systemd[1]: run-netns-cni\x2dc159551e\x2d2d9e\x2d2b04\x2d557d\x2db1c5be4e91ab.mount: Deactivated successfully. Feb 13 20:20:57.500440 sshd[5298]: Received disconnect from 218.92.0.167 port 21296:11: [preauth] Feb 13 20:20:57.500440 sshd[5298]: Disconnected from authenticating user root 218.92.0.167 port 21296 [preauth] Feb 13 20:20:57.503904 systemd[1]: sshd@12-165.232.153.54:22-218.92.0.167:21296.service: Deactivated successfully. 
Feb 13 20:20:57.578075 sshd[5301]: PAM: Permission denied for root from 218.92.0.167 Feb 13 20:20:57.601200 kubelet[2579]: I0213 20:20:57.600608 2579 scope.go:117] "RemoveContainer" containerID="645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c" Feb 13 20:20:57.603863 containerd[1469]: time="2025-02-13T20:20:57.603356428Z" level=info msg="RemoveContainer for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\"" Feb 13 20:20:57.612415 containerd[1469]: time="2025-02-13T20:20:57.612357598Z" level=info msg="RemoveContainer for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" returns successfully" Feb 13 20:20:57.620794 kubelet[2579]: I0213 20:20:57.620537 2579 scope.go:117] "RemoveContainer" containerID="645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c" Feb 13 20:20:57.636280 kubelet[2579]: I0213 20:20:57.635339 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-tigera-ca-bundle\") pod \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\" (UID: \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\") " Feb 13 20:20:57.636280 kubelet[2579]: I0213 20:20:57.635462 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkvgv\" (UniqueName: \"kubernetes.io/projected/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-kube-api-access-kkvgv\") pod \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\" (UID: \"7c64b4ea-ed0c-443b-a61d-284300b0cf5b\") " Feb 13 20:20:57.643330 containerd[1469]: time="2025-02-13T20:20:57.631561227Z" level=error msg="ContainerStatus for \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\": not found" Feb 13 20:20:57.690106 systemd[1]: var-lib-kubelet-pods-7c64b4ea\x2ded0c\x2d443b\x2da61d\x2d284300b0cf5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkvgv.mount: Deactivated successfully. Feb 13 20:20:57.692087 kubelet[2579]: E0213 20:20:57.692010 2579 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\": not found" containerID="645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c" Feb 13 20:20:57.697700 kubelet[2579]: I0213 20:20:57.694374 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-kube-api-access-kkvgv" (OuterVolumeSpecName: "kube-api-access-kkvgv") pod "7c64b4ea-ed0c-443b-a61d-284300b0cf5b" (UID: "7c64b4ea-ed0c-443b-a61d-284300b0cf5b"). InnerVolumeSpecName "kube-api-access-kkvgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:20:57.698609 kubelet[2579]: I0213 20:20:57.698538 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c"} err="failed to get container status \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"645889c280532298e9b9d1cb6571623f80f02057ec6f5e75672c7427e93c2d1c\": not found" Feb 13 20:20:57.699348 kubelet[2579]: I0213 20:20:57.699293 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "7c64b4ea-ed0c-443b-a61d-284300b0cf5b" (UID: "7c64b4ea-ed0c-443b-a61d-284300b0cf5b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:20:57.735974 kubelet[2579]: I0213 20:20:57.735802 2579 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-tigera-ca-bundle\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:20:57.735974 kubelet[2579]: I0213 20:20:57.735863 2579 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kkvgv\" (UniqueName: \"kubernetes.io/projected/7c64b4ea-ed0c-443b-a61d-284300b0cf5b-kube-api-access-kkvgv\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:20:57.754256 sshd[5301]: Received disconnect from 218.92.0.167 port 21570:11: [preauth] Feb 13 20:20:57.754256 sshd[5301]: Disconnected from authenticating user root 218.92.0.167 port 21570 [preauth] Feb 13 20:20:57.758381 systemd[1]: sshd@13-165.232.153.54:22-218.92.0.167:21570.service: Deactivated successfully. Feb 13 20:20:57.855742 systemd[1]: var-lib-kubelet-pods-7c64b4ea\x2ded0c\x2d443b\x2da61d\x2d284300b0cf5b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Feb 13 20:20:57.913589 systemd[1]: Removed slice kubepods-besteffort-pod7c64b4ea_ed0c_443b_a61d_284300b0cf5b.slice - libcontainer container kubepods-besteffort-pod7c64b4ea_ed0c_443b_a61d_284300b0cf5b.slice. Feb 13 20:20:58.390907 systemd[1]: Started sshd@15-165.232.153.54:22-147.75.109.163:33962.service - OpenSSH per-connection server daemon (147.75.109.163:33962). Feb 13 20:20:58.505128 sshd[5445]: Accepted publickey for core from 147.75.109.163 port 33962 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:58.509482 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:58.518442 systemd-logind[1441]: New session 12 of user core. Feb 13 20:20:58.523559 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:20:58.741565 sshd[5445]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:58.754283 systemd[1]: sshd@15-165.232.153.54:22-147.75.109.163:33962.service: Deactivated successfully. Feb 13 20:20:58.758086 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:20:58.770364 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:20:58.779852 systemd[1]: Started sshd@16-165.232.153.54:22-147.75.109.163:33974.service - OpenSSH per-connection server daemon (147.75.109.163:33974). Feb 13 20:20:58.785448 systemd-logind[1441]: Removed session 12. 
Feb 13 20:20:58.848217 sshd[5470]: Accepted publickey for core from 147.75.109.163 port 33974 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:58.850088 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:58.861494 systemd-logind[1441]: New session 13 of user core. Feb 13 20:20:58.866797 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:20:59.111804 sshd[5470]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:59.121603 systemd[1]: sshd@16-165.232.153.54:22-147.75.109.163:33974.service: Deactivated successfully. Feb 13 20:20:59.124171 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:20:59.131453 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:20:59.139844 systemd[1]: Started sshd@17-165.232.153.54:22-147.75.109.163:33982.service - OpenSSH per-connection server daemon (147.75.109.163:33982). Feb 13 20:20:59.143519 systemd-logind[1441]: Removed session 13. Feb 13 20:20:59.201357 sshd[5482]: Accepted publickey for core from 147.75.109.163 port 33982 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:20:59.203448 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:59.218279 systemd-logind[1441]: New session 14 of user core. Feb 13 20:20:59.220597 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:20:59.383677 sshd[5482]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:59.391512 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:20:59.391866 systemd[1]: sshd@17-165.232.153.54:22-147.75.109.163:33982.service: Deactivated successfully. Feb 13 20:20:59.395401 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:20:59.397582 systemd-logind[1441]: Removed session 14. Feb 13 20:20:59.923570 kubelet[2579]: E0213 20:20:59.923383 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:20:59.930596 kubelet[2579]: I0213 20:20:59.930537 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c64b4ea-ed0c-443b-a61d-284300b0cf5b" path="/var/lib/kubelet/pods/7c64b4ea-ed0c-443b-a61d-284300b0cf5b/volumes" Feb 13 20:21:00.836317 systemd[1]: cri-containerd-d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558.scope: Deactivated successfully. Feb 13 20:21:00.900447 containerd[1469]: time="2025-02-13T20:21:00.900368361Z" level=info msg="shim disconnected" id=d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558 namespace=k8s.io Feb 13 20:21:00.900447 containerd[1469]: time="2025-02-13T20:21:00.900436071Z" level=warning msg="cleaning up after shim disconnected" id=d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558 namespace=k8s.io Feb 13 20:21:00.900447 containerd[1469]: time="2025-02-13T20:21:00.900445421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:00.902240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558-rootfs.mount: Deactivated successfully. 
Feb 13 20:21:00.974226 containerd[1469]: time="2025-02-13T20:21:00.974158436Z" level=info msg="StopContainer for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" returns successfully" Feb 13 20:21:00.977007 containerd[1469]: time="2025-02-13T20:21:00.976814506Z" level=info msg="StopPodSandbox for \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\"" Feb 13 20:21:00.977007 containerd[1469]: time="2025-02-13T20:21:00.976890791Z" level=info msg="Container to stop \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:21:00.985260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf-shm.mount: Deactivated successfully. Feb 13 20:21:00.994919 systemd[1]: cri-containerd-8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf.scope: Deactivated successfully. Feb 13 20:21:01.054901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf-rootfs.mount: Deactivated successfully. Feb 13 20:21:01.060947 containerd[1469]: time="2025-02-13T20:21:01.060744424Z" level=info msg="shim disconnected" id=8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf namespace=k8s.io Feb 13 20:21:01.061699 containerd[1469]: time="2025-02-13T20:21:01.061656531Z" level=warning msg="cleaning up after shim disconnected" id=8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf namespace=k8s.io Feb 13 20:21:01.061822 containerd[1469]: time="2025-02-13T20:21:01.061801886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:01.104972 containerd[1469]: time="2025-02-13T20:21:01.104772502Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:21:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:21:01.126460 containerd[1469]: time="2025-02-13T20:21:01.126293797Z" level=info msg="TearDown network for sandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" successfully" Feb 13 20:21:01.126460 containerd[1469]: time="2025-02-13T20:21:01.126395157Z" level=info msg="StopPodSandbox for \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" returns successfully" Feb 13 20:21:01.261518 kubelet[2579]: I0213 20:21:01.261145 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q72db\" (UniqueName: \"kubernetes.io/projected/658449cc-7959-4012-af75-2a4bcfb174e4-kube-api-access-q72db\") pod \"658449cc-7959-4012-af75-2a4bcfb174e4\" (UID: \"658449cc-7959-4012-af75-2a4bcfb174e4\") " Feb 13 20:21:01.261518 kubelet[2579]: I0213 20:21:01.261219 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/658449cc-7959-4012-af75-2a4bcfb174e4-typha-certs\") pod \"658449cc-7959-4012-af75-2a4bcfb174e4\" (UID: \"658449cc-7959-4012-af75-2a4bcfb174e4\") " Feb 13 20:21:01.261518 kubelet[2579]: I0213 20:21:01.261253 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/658449cc-7959-4012-af75-2a4bcfb174e4-tigera-ca-bundle\") pod \"658449cc-7959-4012-af75-2a4bcfb174e4\" (UID: \"658449cc-7959-4012-af75-2a4bcfb174e4\") " Feb 13 20:21:01.278529 kubelet[2579]: 
I0213 20:21:01.276291 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658449cc-7959-4012-af75-2a4bcfb174e4-kube-api-access-q72db" (OuterVolumeSpecName: "kube-api-access-q72db") pod "658449cc-7959-4012-af75-2a4bcfb174e4" (UID: "658449cc-7959-4012-af75-2a4bcfb174e4"). InnerVolumeSpecName "kube-api-access-q72db". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:21:01.277983 systemd[1]: var-lib-kubelet-pods-658449cc\x2d7959\x2d4012\x2daf75\x2d2a4bcfb174e4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 13 20:21:01.278139 systemd[1]: var-lib-kubelet-pods-658449cc\x2d7959\x2d4012\x2daf75\x2d2a4bcfb174e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq72db.mount: Deactivated successfully. Feb 13 20:21:01.285865 kubelet[2579]: I0213 20:21:01.285764 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/658449cc-7959-4012-af75-2a4bcfb174e4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "658449cc-7959-4012-af75-2a4bcfb174e4" (UID: "658449cc-7959-4012-af75-2a4bcfb174e4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:21:01.286518 kubelet[2579]: I0213 20:21:01.286457 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/658449cc-7959-4012-af75-2a4bcfb174e4-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "658449cc-7959-4012-af75-2a4bcfb174e4" (UID: "658449cc-7959-4012-af75-2a4bcfb174e4"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:21:01.362135 kubelet[2579]: I0213 20:21:01.361978 2579 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/658449cc-7959-4012-af75-2a4bcfb174e4-tigera-ca-bundle\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:21:01.362135 kubelet[2579]: I0213 20:21:01.362033 2579 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q72db\" (UniqueName: \"kubernetes.io/projected/658449cc-7959-4012-af75-2a4bcfb174e4-kube-api-access-q72db\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:21:01.362135 kubelet[2579]: I0213 20:21:01.362047 2579 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/658449cc-7959-4012-af75-2a4bcfb174e4-typha-certs\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:21:01.606855 kubelet[2579]: I0213 20:21:01.604656 2579 scope.go:117] "RemoveContainer" containerID="d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558" Feb 13 20:21:01.674260 systemd[1]: Removed slice kubepods-besteffort-pod658449cc_7959_4012_af75_2a4bcfb174e4.slice - libcontainer container kubepods-besteffort-pod658449cc_7959_4012_af75_2a4bcfb174e4.slice. 
Feb 13 20:21:01.684183 containerd[1469]: time="2025-02-13T20:21:01.683749569Z" level=info msg="RemoveContainer for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\"" Feb 13 20:21:01.696152 containerd[1469]: time="2025-02-13T20:21:01.696028964Z" level=info msg="RemoveContainer for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" returns successfully" Feb 13 20:21:01.702415 kubelet[2579]: I0213 20:21:01.696835 2579 scope.go:117] "RemoveContainer" containerID="d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558" Feb 13 20:21:01.702794 containerd[1469]: time="2025-02-13T20:21:01.702712008Z" level=error msg="ContainerStatus for \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\": not found" Feb 13 20:21:01.703940 kubelet[2579]: E0213 20:21:01.703705 2579 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\": not found" containerID="d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558" Feb 13 20:21:01.703940 kubelet[2579]: I0213 20:21:01.703776 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558"} err="failed to get container status \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\": rpc error: code = NotFound desc = an error occurred when try to find container \"d65e2bd9aa5300f3ac2e43bba34d7d584cfc8c00c77e2ce2d72c30ea9b999558\": not found" Feb 13 20:21:01.897553 systemd[1]: var-lib-kubelet-pods-658449cc\x2d7959\x2d4012\x2daf75\x2d2a4bcfb174e4-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Feb 13 20:21:01.926750 kubelet[2579]: I0213 20:21:01.926478 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658449cc-7959-4012-af75-2a4bcfb174e4" path="/var/lib/kubelet/pods/658449cc-7959-4012-af75-2a4bcfb174e4/volumes" Feb 13 20:21:04.405042 systemd[1]: Started sshd@18-165.232.153.54:22-147.75.109.163:42964.service - OpenSSH per-connection server daemon (147.75.109.163:42964). Feb 13 20:21:04.460362 sshd[5684]: Accepted publickey for core from 147.75.109.163 port 42964 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:04.462626 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:04.468189 systemd-logind[1441]: New session 15 of user core. Feb 13 20:21:04.476663 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:21:04.647521 sshd[5684]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:04.654174 systemd[1]: sshd@18-165.232.153.54:22-147.75.109.163:42964.service: Deactivated successfully. Feb 13 20:21:04.657525 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:21:04.658884 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:21:04.660777 systemd-logind[1441]: Removed session 15. 
Feb 13 20:21:07.922614 kubelet[2579]: E0213 20:21:07.922134 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:21:09.669942 systemd[1]: Started sshd@19-165.232.153.54:22-147.75.109.163:33150.service - OpenSSH per-connection server daemon (147.75.109.163:33150). Feb 13 20:21:09.737134 sshd[5790]: Accepted publickey for core from 147.75.109.163 port 33150 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:09.739486 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:09.750195 kubelet[2579]: I0213 20:21:09.749574 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:21:09.753491 systemd-logind[1441]: New session 16 of user core. Feb 13 20:21:09.758337 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:21:09.982644 sshd[5790]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:09.994922 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:21:09.995961 systemd[1]: sshd@19-165.232.153.54:22-147.75.109.163:33150.service: Deactivated successfully. Feb 13 20:21:10.000163 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:21:10.002831 systemd-logind[1441]: Removed session 16. Feb 13 20:21:13.924477 kubelet[2579]: E0213 20:21:13.922383 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:21:15.005906 systemd[1]: Started sshd@20-165.232.153.54:22-147.75.109.163:33160.service - OpenSSH per-connection server daemon (147.75.109.163:33160). Feb 13 20:21:15.071497 sshd[5901]: Accepted publickey for core from 147.75.109.163 port 33160 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:15.073640 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:15.082656 systemd-logind[1441]: New session 17 of user core. Feb 13 20:21:15.091638 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:21:15.311754 sshd[5901]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:15.322225 systemd[1]: sshd@20-165.232.153.54:22-147.75.109.163:33160.service: Deactivated successfully. Feb 13 20:21:15.322433 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:21:15.327921 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:21:15.332748 systemd-logind[1441]: Removed session 17. Feb 13 20:21:20.326778 systemd[1]: Started sshd@21-165.232.153.54:22-147.75.109.163:49888.service - OpenSSH per-connection server daemon (147.75.109.163:49888). Feb 13 20:21:20.432480 sshd[6019]: Accepted publickey for core from 147.75.109.163 port 49888 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:20.438576 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:20.451500 systemd-logind[1441]: New session 18 of user core. Feb 13 20:21:20.458631 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:21:20.692404 sshd[6019]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:20.698730 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. 
Feb 13 20:21:20.699416 systemd[1]: sshd@21-165.232.153.54:22-147.75.109.163:49888.service: Deactivated successfully. Feb 13 20:21:20.706093 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:21:20.708998 systemd-logind[1441]: Removed session 18. Feb 13 20:21:25.718547 systemd[1]: Started sshd@22-165.232.153.54:22-147.75.109.163:49890.service - OpenSSH per-connection server daemon (147.75.109.163:49890). Feb 13 20:21:25.812994 sshd[6149]: Accepted publickey for core from 147.75.109.163 port 49890 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:25.816193 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:25.824630 systemd-logind[1441]: New session 19 of user core. Feb 13 20:21:25.829566 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:21:26.104556 sshd[6149]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:26.118155 systemd[1]: sshd@22-165.232.153.54:22-147.75.109.163:49890.service: Deactivated successfully. Feb 13 20:21:26.123522 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:21:26.124981 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:21:26.134018 systemd[1]: Started sshd@23-165.232.153.54:22-147.75.109.163:49892.service - OpenSSH per-connection server daemon (147.75.109.163:49892). Feb 13 20:21:26.137269 systemd-logind[1441]: Removed session 19. Feb 13 20:21:26.204846 sshd[6171]: Accepted publickey for core from 147.75.109.163 port 49892 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:26.207189 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:26.220621 systemd-logind[1441]: New session 20 of user core. Feb 13 20:21:26.225566 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:21:26.546047 sshd[6171]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:26.561127 systemd[1]: sshd@23-165.232.153.54:22-147.75.109.163:49892.service: Deactivated successfully. Feb 13 20:21:26.568811 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:21:26.577625 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:21:26.584521 systemd[1]: Started sshd@24-165.232.153.54:22-147.75.109.163:49906.service - OpenSSH per-connection server daemon (147.75.109.163:49906). Feb 13 20:21:26.588789 systemd-logind[1441]: Removed session 20. Feb 13 20:21:26.635422 sshd[6202]: Accepted publickey for core from 147.75.109.163 port 49906 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:26.636381 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:26.642359 systemd-logind[1441]: New session 21 of user core. Feb 13 20:21:26.646647 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:21:29.467088 sshd[6202]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:29.500259 systemd[1]: Started sshd@25-165.232.153.54:22-147.75.109.163:47316.service - OpenSSH per-connection server daemon (147.75.109.163:47316). Feb 13 20:21:29.501042 systemd[1]: sshd@24-165.232.153.54:22-147.75.109.163:49906.service: Deactivated successfully. Feb 13 20:21:29.507924 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:21:29.517010 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:21:29.525673 systemd-logind[1441]: Removed session 21. 
Feb 13 20:21:29.609451 sshd[6265]: Accepted publickey for core from 147.75.109.163 port 47316 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:29.612151 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:29.622430 systemd-logind[1441]: New session 22 of user core. Feb 13 20:21:29.628555 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:21:30.620597 sshd[6265]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:30.636035 systemd[1]: sshd@25-165.232.153.54:22-147.75.109.163:47316.service: Deactivated successfully. Feb 13 20:21:30.643957 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:21:30.651906 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:21:30.661182 systemd[1]: Started sshd@26-165.232.153.54:22-147.75.109.163:47330.service - OpenSSH per-connection server daemon (147.75.109.163:47330). Feb 13 20:21:30.669188 systemd-logind[1441]: Removed session 22. Feb 13 20:21:30.739638 sshd[6296]: Accepted publickey for core from 147.75.109.163 port 47330 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:30.742359 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:30.760015 systemd-logind[1441]: New session 23 of user core. Feb 13 20:21:30.766659 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:21:31.046746 sshd[6296]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:31.054555 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:21:31.055340 systemd[1]: sshd@26-165.232.153.54:22-147.75.109.163:47330.service: Deactivated successfully. Feb 13 20:21:31.059281 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:21:31.064300 systemd-logind[1441]: Removed session 23. 
Feb 13 20:21:33.021870 kubelet[2579]: E0213 20:21:33.018949 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:21:35.936470 containerd[1469]: time="2025-02-13T20:21:35.936372095Z" level=info msg="StopPodSandbox for \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\"" Feb 13 20:21:35.937803 containerd[1469]: time="2025-02-13T20:21:35.936516379Z" level=info msg="TearDown network for sandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" successfully" Feb 13 20:21:35.937803 containerd[1469]: time="2025-02-13T20:21:35.936533397Z" level=info msg="StopPodSandbox for \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" returns successfully" Feb 13 20:21:35.957117 containerd[1469]: time="2025-02-13T20:21:35.955556183Z" level=info msg="RemovePodSandbox for \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\"" Feb 13 20:21:35.957117 containerd[1469]: time="2025-02-13T20:21:35.955613607Z" level=info msg="Forcibly stopping sandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\"" Feb 13 20:21:35.957117 containerd[1469]: time="2025-02-13T20:21:35.955714108Z" level=info msg="TearDown network for sandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" successfully" Feb 13 20:21:35.972635 containerd[1469]: time="2025-02-13T20:21:35.972501067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:21:35.973160 containerd[1469]: time="2025-02-13T20:21:35.972926506Z" level=info msg="RemovePodSandbox \"8d39827b29eb31e35ecd989b195f67e2539edb082afbb25f22b00817cdc58cdf\" returns successfully" Feb 13 20:21:35.974276 containerd[1469]: time="2025-02-13T20:21:35.973871439Z" level=info msg="StopPodSandbox for \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\"" Feb 13 20:21:36.071646 systemd[1]: Started sshd@27-165.232.153.54:22-147.75.109.163:47334.service - OpenSSH per-connection server daemon (147.75.109.163:47334). Feb 13 20:21:36.139483 sshd[6439]: Accepted publickey for core from 147.75.109.163 port 47334 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:36.145274 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:36.160152 systemd-logind[1441]: New session 24 of user core. Feb 13 20:21:36.171737 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.087 [WARNING][6432] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.087 [INFO][6432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.087 [INFO][6432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" iface="eth0" netns="" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.087 [INFO][6432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.087 [INFO][6432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.156 [INFO][6442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.156 [INFO][6442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.156 [INFO][6442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.170 [WARNING][6442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.170 [INFO][6442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.177 [INFO][6442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:36.184468 containerd[1469]: 2025-02-13 20:21:36.180 [INFO][6432] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.185061 containerd[1469]: time="2025-02-13T20:21:36.184543616Z" level=info msg="TearDown network for sandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" successfully" Feb 13 20:21:36.185061 containerd[1469]: time="2025-02-13T20:21:36.184588740Z" level=info msg="StopPodSandbox for \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" returns successfully" Feb 13 20:21:36.185584 containerd[1469]: time="2025-02-13T20:21:36.185290117Z" level=info msg="RemovePodSandbox for \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\"" Feb 13 20:21:36.185584 containerd[1469]: time="2025-02-13T20:21:36.185421374Z" level=info msg="Forcibly stopping sandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\"" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.270 [WARNING][6463] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" WorkloadEndpoint="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.270 [INFO][6463] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.270 [INFO][6463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" iface="eth0" netns="" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.270 [INFO][6463] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.270 [INFO][6463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.331 [INFO][6475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.331 [INFO][6475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.331 [INFO][6475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.348 [WARNING][6475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.348 [INFO][6475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" HandleID="k8s-pod-network.e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Workload="ci--4081.3.1--6--670b8c47e7-k8s-calico--kube--controllers--56dc9cc855--j5dzk-eth0" Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.355 [INFO][6475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:21:36.364366 containerd[1469]: 2025-02-13 20:21:36.359 [INFO][6463] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb" Feb 13 20:21:36.365595 containerd[1469]: time="2025-02-13T20:21:36.365480089Z" level=info msg="TearDown network for sandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" successfully" Feb 13 20:21:36.414019 containerd[1469]: time="2025-02-13T20:21:36.413928712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:21:36.415835 containerd[1469]: time="2025-02-13T20:21:36.414099248Z" level=info msg="RemovePodSandbox \"e0f9fe7a780fbae659811a687c48bc54814c05f3be41bec35d555bc2ee27f4fb\" returns successfully" Feb 13 20:21:36.439644 sshd[6439]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:36.447469 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:21:36.448149 systemd[1]: sshd@27-165.232.153.54:22-147.75.109.163:47334.service: Deactivated successfully. Feb 13 20:21:36.452959 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:21:36.455449 systemd-logind[1441]: Removed session 24. Feb 13 20:21:40.921140 kubelet[2579]: E0213 20:21:40.921009 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:21:41.458676 systemd[1]: Started sshd@28-165.232.153.54:22-147.75.109.163:50308.service - OpenSSH per-connection server daemon (147.75.109.163:50308). Feb 13 20:21:41.508353 sshd[6564]: Accepted publickey for core from 147.75.109.163 port 50308 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:41.510937 sshd[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:41.516562 systemd-logind[1441]: New session 25 of user core. Feb 13 20:21:41.525637 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:21:41.691544 sshd[6564]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:41.695104 systemd[1]: sshd@28-165.232.153.54:22-147.75.109.163:50308.service: Deactivated successfully. Feb 13 20:21:41.701402 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:21:41.707292 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:21:41.709063 systemd-logind[1441]: Removed session 25. 
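A pattern worth noting in the sandbox teardown above: every cleanup layer treats "not found" as success. Calico logs "WorkloadEndpoint does not exist in the datastore, moving forward with the clean up", the IPAM plugin logs "Asked to release address but it doesn't exist. Ignoring", and containerd still reports RemovePodSandbox "returns successfully". That is what makes a forced, repeated teardown converge instead of failing. A minimal sketch of the rule, with a hypothetical ErrNotFound and releaseIP standing in for the real Calico calls:

    // Sketch of "not found means already done" cleanup. ErrNotFound and
    // releaseIP are illustrative stand-ins, not Calico APIs.
    package main

    import (
        "errors"
        "fmt"
    )

    var ErrNotFound = errors.New("not found")

    // releaseIP pretends the address was already released earlier.
    func releaseIP(handleID string) error { return ErrNotFound }

    func teardown(handleID string) error {
        if err := releaseIP(handleID); err != nil {
            if errors.Is(err, ErrNotFound) {
                // Asked to release an address that doesn't exist:
                // ignore, so a retried teardown still succeeds.
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(teardown("k8s-pod-network.e0f9fe7a")) // <nil>
    }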
Feb 13 20:21:46.710651 systemd[1]: Started sshd@29-165.232.153.54:22-147.75.109.163:50314.service - OpenSSH per-connection server daemon (147.75.109.163:50314). Feb 13 20:21:46.784412 sshd[6670]: Accepted publickey for core from 147.75.109.163 port 50314 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:46.786323 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:46.796350 systemd-logind[1441]: New session 26 of user core. Feb 13 20:21:46.804962 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:21:46.987192 sshd[6670]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:46.993927 systemd[1]: sshd@29-165.232.153.54:22-147.75.109.163:50314.service: Deactivated successfully. Feb 13 20:21:46.998325 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:21:46.999829 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:21:47.001158 systemd-logind[1441]: Removed session 26. Feb 13 20:21:52.004695 systemd[1]: Started sshd@30-165.232.153.54:22-147.75.109.163:39834.service - OpenSSH per-connection server daemon (147.75.109.163:39834). Feb 13 20:21:52.073341 sshd[6772]: Accepted publickey for core from 147.75.109.163 port 39834 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:52.076085 sshd[6772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:52.081128 systemd-logind[1441]: New session 27 of user core. Feb 13 20:21:52.084542 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:21:52.241922 sshd[6772]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:52.246804 systemd[1]: sshd@30-165.232.153.54:22-147.75.109.163:39834.service: Deactivated successfully. Feb 13 20:21:52.250894 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:21:52.252213 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:21:52.253585 systemd-logind[1441]: Removed session 27. Feb 13 20:21:57.262988 systemd[1]: Started sshd@31-165.232.153.54:22-147.75.109.163:39840.service - OpenSSH per-connection server daemon (147.75.109.163:39840). Feb 13 20:21:57.318740 sshd[6907]: Accepted publickey for core from 147.75.109.163 port 39840 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:21:57.321220 sshd[6907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:57.332779 systemd-logind[1441]: New session 28 of user core. Feb 13 20:21:57.338649 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:21:57.528651 sshd[6907]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:57.535652 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:21:57.537496 systemd[1]: sshd@31-165.232.153.54:22-147.75.109.163:39840.service: Deactivated successfully. Feb 13 20:21:57.540786 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:21:57.544007 systemd-logind[1441]: Removed session 28. Feb 13 20:22:00.934813 kubelet[2579]: E0213 20:22:00.934604 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:02.555240 systemd[1]: Started sshd@32-165.232.153.54:22-147.75.109.163:56568.service - OpenSSH per-connection server daemon (147.75.109.163:56568). 
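The recurring kubelet dns.go:153 error above fires every time the kubelet composes a pod's resolver configuration: the glibc resolver honors at most three nameserver entries (MAXNS), so extra ones are omitted with a warning. Note that the applied line shown, "67.207.67.2 67.207.67.3 67.207.67.2", contains a duplicate, so deduplicating the host's resolv.conf would already silence this. A minimal illustration of that clamping (not kubelet's actual code):

    // Sketch: dedupe nameservers and cap them at libc's limit of three.
    package main

    import "fmt"

    const maxNameservers = 3 // mirrors the glibc MAXNS limit

    func clampNameservers(in []string) []string {
        seen := map[string]bool{}
        var out []string
        for _, ns := range in {
            if seen[ns] {
                continue // a duplicate would waste one of the three slots
            }
            seen[ns] = true
            out = append(out, ns)
            if len(out) == maxNameservers {
                break
            }
        }
        return out
    }

    func main() {
        fmt.Println(clampNameservers([]string{"67.207.67.2", "67.207.67.3", "67.207.67.2"}))
        // [67.207.67.2 67.207.67.3]
    }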
Feb 13 20:22:02.650399 sshd[7084]: Accepted publickey for core from 147.75.109.163 port 56568 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:22:02.653696 sshd[7084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:02.661723 systemd-logind[1441]: New session 29 of user core. Feb 13 20:22:02.668689 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:22:02.966057 sshd[7084]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:02.972827 systemd[1]: sshd@32-165.232.153.54:22-147.75.109.163:56568.service: Deactivated successfully. Feb 13 20:22:02.977789 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:22:02.979483 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:22:02.980933 systemd-logind[1441]: Removed session 29. Feb 13 20:22:06.926695 kubelet[2579]: E0213 20:22:06.926642 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:07.978632 systemd[1]: Started sshd@33-165.232.153.54:22-147.75.109.163:56570.service - OpenSSH per-connection server daemon (147.75.109.163:56570). Feb 13 20:22:08.030493 sshd[7104]: Accepted publickey for core from 147.75.109.163 port 56570 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:22:08.033121 sshd[7104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:08.040033 systemd-logind[1441]: New session 30 of user core. Feb 13 20:22:08.049879 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:22:08.275790 sshd[7104]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:08.284659 systemd-logind[1441]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:22:08.285022 systemd[1]: sshd@33-165.232.153.54:22-147.75.109.163:56570.service: Deactivated successfully. Feb 13 20:22:08.292466 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:22:08.293987 systemd-logind[1441]: Removed session 30. Feb 13 20:22:08.730261 systemd[1]: run-containerd-runc-k8s.io-807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d-runc.Yz3BQz.mount: Deactivated successfully. Feb 13 20:22:08.832933 containerd[1469]: time="2025-02-13T20:22:08.825698941Z" level=info msg="StopContainer for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" with timeout 5 (s)" Feb 13 20:22:08.834188 containerd[1469]: time="2025-02-13T20:22:08.834005799Z" level=info msg="Stop container \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" with signal terminated" Feb 13 20:22:08.890394 systemd[1]: cri-containerd-807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d.scope: Deactivated successfully. Feb 13 20:22:08.890633 systemd[1]: cri-containerd-807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d.scope: Consumed 17.921s CPU time. 
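The "StopContainer ... with timeout 5 (s)" and "Stop container ... with signal terminated" lines above describe the standard graceful-stop contract: send SIGTERM, allow the grace period, and escalate to SIGKILL only if the process is still alive. The scope then deactivates and its accumulated CPU time (17.921s here) is reported. A minimal sketch of the generic pattern, not containerd's implementation:

    // Sketch: SIGTERM, wait out the grace period, then SIGKILL.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period over: SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(stopWithTimeout(cmd, 5*time.Second)) // signal: terminated
    }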
Feb 13 20:22:08.916873 containerd[1469]: time="2025-02-13T20:22:08.916698822Z" level=info msg="shim disconnected" id=807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d namespace=k8s.io Feb 13 20:22:08.916873 containerd[1469]: time="2025-02-13T20:22:08.916824805Z" level=warning msg="cleaning up after shim disconnected" id=807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d namespace=k8s.io Feb 13 20:22:08.916873 containerd[1469]: time="2025-02-13T20:22:08.916834283Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:08.921787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d-rootfs.mount: Deactivated successfully. Feb 13 20:22:09.023212 containerd[1469]: time="2025-02-13T20:22:09.022887497Z" level=info msg="StopContainer for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" returns successfully" Feb 13 20:22:09.025183 containerd[1469]: time="2025-02-13T20:22:09.025147821Z" level=info msg="StopPodSandbox for \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\"" Feb 13 20:22:09.025384 containerd[1469]: time="2025-02-13T20:22:09.025201871Z" level=info msg="Container to stop \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:22:09.025384 containerd[1469]: time="2025-02-13T20:22:09.025215403Z" level=info msg="Container to stop \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:22:09.025384 containerd[1469]: time="2025-02-13T20:22:09.025227272Z" level=info msg="Container to stop \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:22:09.030060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a-shm.mount: Deactivated successfully. Feb 13 20:22:09.038359 systemd[1]: cri-containerd-30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a.scope: Deactivated successfully. Feb 13 20:22:09.070130 containerd[1469]: time="2025-02-13T20:22:09.070052734Z" level=info msg="shim disconnected" id=30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a namespace=k8s.io Feb 13 20:22:09.070576 containerd[1469]: time="2025-02-13T20:22:09.070345580Z" level=warning msg="cleaning up after shim disconnected" id=30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a namespace=k8s.io Feb 13 20:22:09.070576 containerd[1469]: time="2025-02-13T20:22:09.070371499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:09.072237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a-rootfs.mount: Deactivated successfully. 
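The three "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" messages above are informational: when a pod sandbox is stopped, only containers still running (or in an unknown state) need an explicit stop, and already-exited ones are skipped. A trivial sketch of that check, using the CRI state names visible in the log:

    // Sketch of the state gate behind the "must be in running or unknown
    // state" messages; illustrative only.
    package main

    import "fmt"

    type State string

    const (
        Running State = "CONTAINER_RUNNING"
        Exited  State = "CONTAINER_EXITED"
        Unknown State = "CONTAINER_UNKNOWN"
    )

    func needsStop(s State) bool {
        return s == Running || s == Unknown
    }

    func main() {
        for _, s := range []State{Running, Exited, Unknown} {
            fmt.Printf("%s -> stop=%v\n", s, needsStop(s))
        }
    }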
Feb 13 20:22:09.114007 containerd[1469]: time="2025-02-13T20:22:09.113958851Z" level=info msg="TearDown network for sandbox \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" successfully" Feb 13 20:22:09.114007 containerd[1469]: time="2025-02-13T20:22:09.113995513Z" level=info msg="StopPodSandbox for \"30913cf0a70f54f141357f8bb436f8dc460148718f4b46913928fa24a96d376a\" returns successfully" Feb 13 20:22:09.157934 kubelet[2579]: I0213 20:22:09.157878 2579 scope.go:117] "RemoveContainer" containerID="807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d" Feb 13 20:22:09.160396 containerd[1469]: time="2025-02-13T20:22:09.160362765Z" level=info msg="RemoveContainer for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\"" Feb 13 20:22:09.167337 containerd[1469]: time="2025-02-13T20:22:09.167176608Z" level=info msg="RemoveContainer for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" returns successfully" Feb 13 20:22:09.168733 kubelet[2579]: I0213 20:22:09.168685 2579 scope.go:117] "RemoveContainer" containerID="f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e" Feb 13 20:22:09.171750 containerd[1469]: time="2025-02-13T20:22:09.171712455Z" level=info msg="RemoveContainer for \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\"" Feb 13 20:22:09.178487 kubelet[2579]: I0213 20:22:09.177984 2579 topology_manager.go:215] "Topology Admit Handler" podUID="77a833ab-e101-4f0d-bf6c-5ea8e88b8b40" podNamespace="calico-system" podName="calico-node-9zzl5" Feb 13 20:22:09.181337 containerd[1469]: time="2025-02-13T20:22:09.179356011Z" level=info msg="RemoveContainer for \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\" returns successfully" Feb 13 20:22:09.181720 kubelet[2579]: I0213 20:22:09.181686 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-node-certs\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.184733 kubelet[2579]: I0213 20:22:09.183833 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-lib-calico\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.184733 kubelet[2579]: I0213 20:22:09.183900 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klpvj\" (UniqueName: \"kubernetes.io/projected/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-kube-api-access-klpvj\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.184733 kubelet[2579]: I0213 20:22:09.183932 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-xtables-lock\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.184733 kubelet[2579]: I0213 20:22:09.183952 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-net-dir\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.184733 kubelet[2579]: 
I0213 20:22:09.183967 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-flexvol-driver-host\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.184733 kubelet[2579]: I0213 20:22:09.183985 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-policysync\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.185064 kubelet[2579]: I0213 20:22:09.184003 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-log-dir\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.185064 kubelet[2579]: I0213 20:22:09.184019 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-bin-dir\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.185064 kubelet[2579]: I0213 20:22:09.184033 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-lib-modules\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.185064 kubelet[2579]: I0213 20:22:09.184055 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-run-calico\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.185064 kubelet[2579]: I0213 20:22:09.184073 2579 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-tigera-ca-bundle\") pod \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\" (UID: \"39184bb0-cb2d-427c-beb7-c5eeacb43ad1\") " Feb 13 20:22:09.187051 kubelet[2579]: I0213 20:22:09.185856 2579 scope.go:117] "RemoveContainer" containerID="257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e" Feb 13 20:22:09.195425 kubelet[2579]: I0213 20:22:09.195296 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.206744 kubelet[2579]: E0213 20:22:09.206341 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="658449cc-7959-4012-af75-2a4bcfb174e4" containerName="calico-typha" Feb 13 20:22:09.215176 kubelet[2579]: I0213 20:22:09.214782 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.215437 kubelet[2579]: I0213 20:22:09.215209 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.215437 kubelet[2579]: I0213 20:22:09.215351 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.215530 kubelet[2579]: I0213 20:22:09.215513 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-policysync" (OuterVolumeSpecName: "policysync") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.215955 kubelet[2579]: I0213 20:22:09.215662 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.216048 kubelet[2579]: I0213 20:22:09.215688 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.216048 kubelet[2579]: I0213 20:22:09.215996 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.216048 kubelet[2579]: I0213 20:22:09.216012 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:22:09.217851 kubelet[2579]: I0213 20:22:09.217714 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-kube-api-access-klpvj" (OuterVolumeSpecName: "kube-api-access-klpvj") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "kube-api-access-klpvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:22:09.217851 kubelet[2579]: E0213 20:22:09.217772 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39184bb0-cb2d-427c-beb7-c5eeacb43ad1" containerName="calico-node" Feb 13 20:22:09.217851 kubelet[2579]: E0213 20:22:09.217795 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c64b4ea-ed0c-443b-a61d-284300b0cf5b" containerName="calico-kube-controllers" Feb 13 20:22:09.217851 kubelet[2579]: E0213 20:22:09.217804 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39184bb0-cb2d-427c-beb7-c5eeacb43ad1" containerName="flexvol-driver" Feb 13 20:22:09.217851 kubelet[2579]: E0213 20:22:09.217812 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39184bb0-cb2d-427c-beb7-c5eeacb43ad1" containerName="install-cni" Feb 13 20:22:09.220363 kubelet[2579]: I0213 20:22:09.219426 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-node-certs" (OuterVolumeSpecName: "node-certs") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:22:09.221674 containerd[1469]: time="2025-02-13T20:22:09.221395220Z" level=info msg="RemoveContainer for \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\"" Feb 13 20:22:09.232920 kubelet[2579]: I0213 20:22:09.232855 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="658449cc-7959-4012-af75-2a4bcfb174e4" containerName="calico-typha" Feb 13 20:22:09.235397 kubelet[2579]: I0213 20:22:09.234962 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c64b4ea-ed0c-443b-a61d-284300b0cf5b" containerName="calico-kube-controllers" Feb 13 20:22:09.235397 kubelet[2579]: I0213 20:22:09.235098 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="39184bb0-cb2d-427c-beb7-c5eeacb43ad1" containerName="calico-node" Feb 13 20:22:09.236026 kubelet[2579]: I0213 20:22:09.235979 2579 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "39184bb0-cb2d-427c-beb7-c5eeacb43ad1" (UID: "39184bb0-cb2d-427c-beb7-c5eeacb43ad1"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:22:09.254671 containerd[1469]: time="2025-02-13T20:22:09.253803809Z" level=info msg="RemoveContainer for \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\" returns successfully" Feb 13 20:22:09.255228 kubelet[2579]: I0213 20:22:09.255107 2579 scope.go:117] "RemoveContainer" containerID="807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d" Feb 13 20:22:09.255490 containerd[1469]: time="2025-02-13T20:22:09.255417771Z" level=error msg="ContainerStatus for \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\": not found" Feb 13 20:22:09.261241 kubelet[2579]: E0213 20:22:09.261020 2579 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\": not found" containerID="807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d" Feb 13 20:22:09.261241 kubelet[2579]: I0213 20:22:09.261091 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d"} err="failed to get container status \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\": rpc error: code = NotFound desc = an error occurred when try to find container \"807bf0b0efb95924c56a599796da24db83d6b74a91d2861fdca3dad96573d20d\": not found" Feb 13 20:22:09.261241 kubelet[2579]: I0213 20:22:09.261119 2579 scope.go:117] "RemoveContainer" containerID="f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e" Feb 13 20:22:09.261764 containerd[1469]: time="2025-02-13T20:22:09.261536616Z" level=error msg="ContainerStatus for \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\": not found" Feb 13 20:22:09.263335 kubelet[2579]: E0213 20:22:09.261905 2579 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\": not found" containerID="f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e" Feb 13 20:22:09.263335 kubelet[2579]: I0213 20:22:09.261938 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e"} err="failed to get container status \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8de9b5ae8dff21a86d00c16f20d8c208af058dcdc08ff14e9b57945bdc07e3e\": not found" Feb 13 20:22:09.263335 kubelet[2579]: I0213 20:22:09.261976 2579 scope.go:117] "RemoveContainer" containerID="257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e" Feb 13 20:22:09.265642 containerd[1469]: time="2025-02-13T20:22:09.265538206Z" level=error msg="ContainerStatus for \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\": not found" Feb 13 20:22:09.268566 kubelet[2579]: E0213 20:22:09.267476 2579 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\": not found" containerID="257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e" Feb 13 20:22:09.268566 kubelet[2579]: I0213 20:22:09.267527 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e"} err="failed to get container status \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\": rpc error: code = NotFound desc = an error occurred when try to find container \"257da9cee031533108aa7fd33fbfa739e6ed2b84b4f8ce9187974a9088de984e\": not found" Feb 13 20:22:09.276682 systemd[1]: Created slice kubepods-besteffort-pod77a833ab_e101_4f0d_bf6c_5ea8e88b8b40.slice - libcontainer container kubepods-besteffort-pod77a833ab_e101_4f0d_bf6c_5ea8e88b8b40.slice. Feb 13 20:22:09.288013 kubelet[2579]: I0213 20:22:09.287319 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-var-run-calico\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288013 kubelet[2579]: I0213 20:22:09.287413 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-lib-modules\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288013 kubelet[2579]: I0213 20:22:09.287442 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-policysync\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288013 kubelet[2579]: I0213 20:22:09.287471 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-node-certs\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288013 kubelet[2579]: I0213 20:22:09.287502 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-cni-net-dir\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288446 kubelet[2579]: I0213 20:22:09.287526 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-flexvol-driver-host\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288446 kubelet[2579]: I0213 20:22:09.287554 2579 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-cni-log-dir\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288446 kubelet[2579]: I0213 20:22:09.287577 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmq6b\" (UniqueName: \"kubernetes.io/projected/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-kube-api-access-mmq6b\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288446 kubelet[2579]: I0213 20:22:09.287602 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-tigera-ca-bundle\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288446 kubelet[2579]: I0213 20:22:09.287628 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-xtables-lock\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287650 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-var-lib-calico\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287674 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77a833ab-e101-4f0d-bf6c-5ea8e88b8b40-cni-bin-dir\") pod \"calico-node-9zzl5\" (UID: \"77a833ab-e101-4f0d-bf6c-5ea8e88b8b40\") " pod="calico-system/calico-node-9zzl5" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287718 2579 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-node-certs\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287733 2579 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-lib-calico\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287751 2579 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-klpvj\" (UniqueName: \"kubernetes.io/projected/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-kube-api-access-klpvj\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287766 2579 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-xtables-lock\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288651 kubelet[2579]: I0213 20:22:09.287782 2579 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-net-dir\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287797 2579 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-flexvol-driver-host\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287811 2579 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-policysync\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287824 2579 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-log-dir\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287837 2579 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-lib-modules\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287852 2579 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-cni-bin-dir\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287866 2579 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-var-run-calico\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.288916 kubelet[2579]: I0213 20:22:09.287878 2579 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39184bb0-cb2d-427c-beb7-c5eeacb43ad1-tigera-ca-bundle\") on node \"ci-4081.3.1-6-670b8c47e7\" DevicePath \"\"" Feb 13 20:22:09.458587 systemd[1]: Removed slice kubepods-besteffort-pod39184bb0_cb2d_427c_beb7_c5eeacb43ad1.slice - libcontainer container kubepods-besteffort-pod39184bb0_cb2d_427c_beb7_c5eeacb43ad1.slice. Feb 13 20:22:09.458731 systemd[1]: kubepods-besteffort-pod39184bb0_cb2d_427c_beb7_c5eeacb43ad1.slice: Consumed 18.683s CPU time. Feb 13 20:22:09.582458 kubelet[2579]: E0213 20:22:09.581829 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:09.605680 containerd[1469]: time="2025-02-13T20:22:09.605578145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zzl5,Uid:77a833ab-e101-4f0d-bf6c-5ea8e88b8b40,Namespace:calico-system,Attempt:0,}" Feb 13 20:22:09.698577 containerd[1469]: time="2025-02-13T20:22:09.698427001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:09.698577 containerd[1469]: time="2025-02-13T20:22:09.698518931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:09.698577 containerd[1469]: time="2025-02-13T20:22:09.698541358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:09.700171 containerd[1469]: time="2025-02-13T20:22:09.700063144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:09.727237 systemd[1]: var-lib-kubelet-pods-39184bb0\x2dcb2d\x2d427c\x2dbeb7\x2dc5eeacb43ad1-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 13 20:22:09.727620 systemd[1]: var-lib-kubelet-pods-39184bb0\x2dcb2d\x2d427c\x2dbeb7\x2dc5eeacb43ad1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklpvj.mount: Deactivated successfully. Feb 13 20:22:09.727814 systemd[1]: var-lib-kubelet-pods-39184bb0\x2dcb2d\x2d427c\x2dbeb7\x2dc5eeacb43ad1-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 13 20:22:09.742630 systemd[1]: Started cri-containerd-4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda.scope - libcontainer container 4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda. Feb 13 20:22:09.788628 containerd[1469]: time="2025-02-13T20:22:09.787889434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zzl5,Uid:77a833ab-e101-4f0d-bf6c-5ea8e88b8b40,Namespace:calico-system,Attempt:0,} returns sandbox id \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\"" Feb 13 20:22:09.789254 kubelet[2579]: E0213 20:22:09.789222 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:09.810922 containerd[1469]: time="2025-02-13T20:22:09.810776103Z" level=info msg="CreateContainer within sandbox \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:22:09.834154 containerd[1469]: time="2025-02-13T20:22:09.834025996Z" level=info msg="CreateContainer within sandbox \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5\"" Feb 13 20:22:09.836595 containerd[1469]: time="2025-02-13T20:22:09.836557499Z" level=info msg="StartContainer for \"628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5\"" Feb 13 20:22:09.895528 systemd[1]: Started cri-containerd-628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5.scope - libcontainer container 628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5. Feb 13 20:22:09.928883 kubelet[2579]: I0213 20:22:09.928841 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39184bb0-cb2d-427c-beb7-c5eeacb43ad1" path="/var/lib/kubelet/pods/39184bb0-cb2d-427c-beb7-c5eeacb43ad1/volumes" Feb 13 20:22:09.941810 containerd[1469]: time="2025-02-13T20:22:09.941754682Z" level=info msg="StartContainer for \"628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5\" returns successfully" Feb 13 20:22:09.992632 systemd[1]: cri-containerd-628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5.scope: Deactivated successfully. 
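Two name manglings are visible in the slice and mount-unit lines above. The kubelet forms the pod's cgroup slice by replacing the dashes in the pod UID with underscores (77a833ab-e101-4f0d-bf6c-5ea8e88b8b40 becomes kubepods-besteffort-pod77a833ab_e101_4f0d_bf6c_5ea8e88b8b40.slice), and systemd encodes mount paths into unit names by mapping "/" to "-" and escaping literal "-" as \x2d. A simplified sketch of both, fitted to this log rather than taken from the real implementations:

    // Sketch: pod-UID-to-slice naming and (simplified) systemd mount-unit
    // escaping, as seen in the log above. Illustrative only.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSliceName(uid string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func systemdMountUnit(path string) string {
        p := strings.Trim(path, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`) // escape literal dashes first
        return strings.ReplaceAll(p, "/", "-") + ".mount"
    }

    func main() {
        fmt.Println(podSliceName("77a833ab-e101-4f0d-bf6c-5ea8e88b8b40"))
        fmt.Println(systemdMountUnit("/var/lib/kubelet/pods/39184bb0-cb2d-427c-beb7-c5eeacb43ad1/volumes"))
    }

Real systemd escaping covers more punctuation than this sketch; the \x7e sequences in the unit names above are escaped "~" characters from paths like kubernetes.io~projected.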
Feb 13 20:22:10.028535 containerd[1469]: time="2025-02-13T20:22:10.028474322Z" level=info msg="shim disconnected" id=628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5 namespace=k8s.io Feb 13 20:22:10.028535 containerd[1469]: time="2025-02-13T20:22:10.028531155Z" level=warning msg="cleaning up after shim disconnected" id=628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5 namespace=k8s.io Feb 13 20:22:10.028535 containerd[1469]: time="2025-02-13T20:22:10.028539762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:10.158073 kubelet[2579]: E0213 20:22:10.157956 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:10.163855 containerd[1469]: time="2025-02-13T20:22:10.163788831Z" level=info msg="CreateContainer within sandbox \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:22:10.187459 containerd[1469]: time="2025-02-13T20:22:10.186707962Z" level=info msg="CreateContainer within sandbox \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e\"" Feb 13 20:22:10.188449 containerd[1469]: time="2025-02-13T20:22:10.187781960Z" level=info msg="StartContainer for \"3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e\"" Feb 13 20:22:10.224677 systemd[1]: Started cri-containerd-3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e.scope - libcontainer container 3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e. Feb 13 20:22:10.272220 containerd[1469]: time="2025-02-13T20:22:10.272165948Z" level=info msg="StartContainer for \"3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e\" returns successfully" Feb 13 20:22:10.725670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-628fa3f6aa4cbb77068bd1190570f0759848bc394777431bf33cc126d33ab9f5-rootfs.mount: Deactivated successfully. Feb 13 20:22:11.280055 kubelet[2579]: E0213 20:22:11.279445 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:11.493458 systemd[1]: cri-containerd-3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e.scope: Deactivated successfully. Feb 13 20:22:11.549078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e-rootfs.mount: Deactivated successfully. 
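The replacement calico-node pod starts its containers strictly in sequence: flexvol-driver runs and exits (the scope deactivation and shim cleanup above), then install-cni does the same, and only afterwards does the long-running calico-node container start in the next block. This matches Kubernetes init-container semantics, where each init container must exit successfully before the next begins. A minimal sketch of that ordering, with trivial stand-in commands:

    // Sketch: run init steps one at a time; each must succeed before the
    // next starts. Step names follow the log; the runner is illustrative.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runInitChain(steps [][]string) error {
        for _, argv := range steps {
            cmd := exec.Command(argv[0], argv[1:]...)
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("init step %v failed: %w", argv, err)
            }
        }
        return nil
    }

    func main() {
        err := runInitChain([][]string{
            {"true"}, // stand-in for flexvol-driver
            {"true"}, // stand-in for install-cni
        })
        fmt.Println("init chain:", err) // <nil>, so calico-node may start
    }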
Feb 13 20:22:11.591539 containerd[1469]: time="2025-02-13T20:22:11.591453485Z" level=info msg="shim disconnected" id=3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e namespace=k8s.io Feb 13 20:22:11.591539 containerd[1469]: time="2025-02-13T20:22:11.591533173Z" level=warning msg="cleaning up after shim disconnected" id=3a8c6b23abd4c08cde4a3ff5255e42e9fd473d9f30b37625e2f1622e5741746e namespace=k8s.io Feb 13 20:22:11.591539 containerd[1469]: time="2025-02-13T20:22:11.591542817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:12.252242 kubelet[2579]: E0213 20:22:12.251186 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:12.283774 containerd[1469]: time="2025-02-13T20:22:12.283707045Z" level=info msg="CreateContainer within sandbox \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:22:12.309178 containerd[1469]: time="2025-02-13T20:22:12.308985077Z" level=info msg="CreateContainer within sandbox \"4aade734387727be9986a67dcdd8a25a16fd5684cd6cd4998062db26005ecbda\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e9d6accd1ea5500d4684dca9839842ac32b4a65bcffdb87f1a39ed236760742a\"" Feb 13 20:22:12.311723 containerd[1469]: time="2025-02-13T20:22:12.311657972Z" level=info msg="StartContainer for \"e9d6accd1ea5500d4684dca9839842ac32b4a65bcffdb87f1a39ed236760742a\"" Feb 13 20:22:12.355077 systemd[1]: Started cri-containerd-e9d6accd1ea5500d4684dca9839842ac32b4a65bcffdb87f1a39ed236760742a.scope - libcontainer container e9d6accd1ea5500d4684dca9839842ac32b4a65bcffdb87f1a39ed236760742a. Feb 13 20:22:12.428917 containerd[1469]: time="2025-02-13T20:22:12.428871595Z" level=info msg="StartContainer for \"e9d6accd1ea5500d4684dca9839842ac32b4a65bcffdb87f1a39ed236760742a\" returns successfully" Feb 13 20:22:13.189389 systemd[1]: Started sshd@34-165.232.153.54:22-218.92.0.167:56241.service - OpenSSH per-connection server daemon (218.92.0.167:56241). Feb 13 20:22:13.223155 systemd[1]: Started sshd@35-165.232.153.54:22-218.92.0.167:56521.service - OpenSSH per-connection server daemon (218.92.0.167:56521). Feb 13 20:22:13.260246 kubelet[2579]: E0213 20:22:13.259689 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:13.299617 systemd[1]: Started sshd@36-165.232.153.54:22-147.75.109.163:54924.service - OpenSSH per-connection server daemon (147.75.109.163:54924). Feb 13 20:22:13.393553 sshd[7446]: Accepted publickey for core from 147.75.109.163 port 54924 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:22:13.397246 sshd[7446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:13.403797 systemd-logind[1441]: New session 31 of user core. Feb 13 20:22:13.408034 systemd[1]: Started session-31.scope - Session 31 of User core. 
Feb 13 20:22:14.380409 kubelet[2579]: E0213 20:22:14.380361 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:22:14.411564 sshd[7564]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:22:14.416343 sshd[7565]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root Feb 13 20:22:14.680565 sshd[7446]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:14.694181 systemd[1]: sshd@36-165.232.153.54:22-147.75.109.163:54924.service: Deactivated successfully. Feb 13 20:22:14.706199 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:22:14.709344 systemd-logind[1441]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:22:14.710656 systemd-logind[1441]: Removed session 31. Feb 13 20:22:15.921384 kubelet[2579]: E0213 20:22:15.921163 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
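The final lines show classic password-guessing traffic: 218.92.0.167 opens two connections (sshd@34 and sshd@35) and immediately fails pam_unix authentication for root on both. Counting pam_unix(sshd:auth) failures per rhost is the usual detection signal here (it is what a tool like fail2ban keys on). A minimal sketch, with the regexp fitted to the exact format above and thresholds/actions left out:

    // Sketch: count sshd authentication failures per remote host.
    package main

    import (
        "fmt"
        "regexp"
    )

    var failRe = regexp.MustCompile(`pam_unix\(sshd:auth\): authentication failure;.* rhost=(\S+)`)

    func main() {
        lines := []string{
            `sshd[7564]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root`,
            `sshd[7565]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.167 user=root`,
        }
        counts := map[string]int{}
        for _, l := range lines {
            if m := failRe.FindStringSubmatch(l); m != nil {
                counts[m[1]]++
            }
        }
        fmt.Println(counts) // map[218.92.0.167:2]
    }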